Ask the Directory Services Team articles Ask the Directory Services Team articles Sun, 24 Oct 2021 14:55:33 GMT AskDS 2021-10-24T14:55:33Z Active Directory Web Services Event 1202 <P>Hey there! Josh Mora here, with a brief post on an issue I recently had and wanted to make public, in hopes this will help those that run into this issue, and in addition, provide some helpful logging information that can be useful for any ADWS issues you might come across.</P> <P>&nbsp;</P> <P><STRONG><U>Scenario:</U></STRONG></P> <P>So, the issue I want to talk to you about: You have an AD LDS server, on which you are running ADWS, and you are constantly, minute after minute after minute, getting Event 1202 in the ADWS events with the following information:</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="text">Log Name:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Active Directory Web Services Source:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ADWS Date:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5/05/2020 1:30:00 PM Event ID:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1202 Task Category: ADWS Instance Events Level:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Error Keywords:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Classic User:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A Computer:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Description: This computer is now hosting the specified directory instance, but Active Directory Web Services could not service it. Active Directory Web Services will retry this operation periodically. Directory instance: ADAM_INSTANCE Directory instance LDAP port: 389 Directory instance SSL port: 636</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><EM>&nbsp;</EM></P> <P>Now, this might not even be disrupting your services, everything may continue to work properly. However, this excessive logging of 1202 events can become annoying, and even troubling, since it very well could be indicating issues that you aren’t even aware of. 
So, let’s jump straight into how we can find the cause of this, and how we resolve it.</P> <P>&nbsp;</P> <P><STRONG><U>ADWS Debug Logging:</U></STRONG></P> <P>In this situation, I used the built-in ADWS Debug Logging functionality. Enabling the debug logging consists of modifying the “Microsoft.ActiveDirectory.WebServices.exe.config” file, which accepts various configuration parameters that unlock extra functionality in ADWS, as explained in this <A href="#" target="_blank" rel="noopener">Microsoft Documentation</A>. Unfortunately, that documentation doesn’t cover the parameters for enabling Debug Logging, hence this post.</P> <P>&nbsp;</P> <P><U>Checking ADWS Configuration Information:</U></P> <P>Special thanks to Jason Bender, who put together these two commands that conveniently pull the configuration information from the ADWS Config file.</P> <P>&nbsp;</P> <OL> <LI>In a PowerShell window, run the following: [xml]$ADWSConfiguration = get-content -path c:\windows\adws\microsoft.activedirectory.webservices.exe.config</LI> <LI>Then, run: $ADWSConfiguration.configuration.appsettings.add</LI> <LI>You should get an output like this:</LI> </OL> <P>&nbsp;</P> <P>&nbsp;</P> <P><U>Enabling ADWS Debug Logging: </U></P> <P>&nbsp;</P> <OL> <LI>Navigate to ‘C:\Windows\ADWS’. The file we are looking to modify is “Microsoft.ActiveDirectory.WebServices.Exe.Config”.</LI> <LI>Before making any changes, I strongly suggest taking a backup of the “Microsoft.ActiveDirectory.WebServices.Exe.Config” file. You can never be too safe!</LI> <LI>Right-click the file “Microsoft.ActiveDirectory.WebServices.Exe.Config”, then Open with, and select Notepad, or any other text editor. 
Right under &lt;appSettings&gt;, enter the following two lines:</LI> </OL> <P>&nbsp;</P> <P>&lt;add key="DebugLevel" value="Info"/&gt;</P> <P>&lt;add key="DebugLogFile" value="C:\Windows\Debug\ADWSlog.txt"/&gt;</P> <P>&nbsp;</P> <P>This Config file does not tolerate the smallest mistake, so make sure you do not have any typos or extra spaces.</P> <P>&nbsp;</P> <P>&nbsp;</P> <OL> <LI>Once the file has been modified, save the file and then restart the ADWS service for the changes to take effect.</LI> </OL> <P>&nbsp;</P> <OL> <LI>You can then run the PowerShell commands and should now be able to see the DebugLevel and DebugLogFile set.</LI> </OL> <P>&nbsp;</P> <P><U>Information to keep in mind: </U></P> <UL> <LI>Typos or extra spaces in the Config file can cause the ADWS service to fail to start with the following error: “<EM>Windows could not start the Active Directory Web Services service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.</EM>”</LI> <LI>There are other debug levels for the DebugLevel parameter, including “None”, “Warn” and “Error”. 
However, the most helpful and informative is “Info”.</LI> <LI>The DebugLogFile location is not fixed; you can point it to any path that suits your needs.</LI> <LI>ADWS Debug Logging can generate a lot of data when set to “Info”, so it’s best to<STRONG> only have it running while you are reproducing your issue, after which you should disable the logging</STRONG> by deleting the lines that were added.</LI> </UL> <P>&nbsp;</P> <P><STRONG><U>Analyzing the ADWS Debug Log file:</U></STRONG></P> <P>To clarify, this blog is not a guide to overall analysis of the ADWS Debug Log file; it focuses on the issue at hand, the excessive 1202 events, so that’s what I will be addressing.</P> <P>&nbsp;</P> <P>The first thing we see is the ScavengerThread waking up and beginning to look for instances:</P> <P>&nbsp;</P> <P><EM>LdapSessionPoolImplementation: [05/05/2020 1:29:40 PM] [8] ScavengerThread: woke up</EM></P> <P><EM>LdapSessionPoolImplementation: [05/05/2020 1:29:40 PM] [8] ScavengerThread: processing next pool</EM></P> <P><EM>ConnectionPool: [05/05/2020 1:29:40 PM] [8] GetReservationEnumerator: entering, instance=NTDS</EM></P> <P><EM>LdapSessionPoolImplementation: [05/05/2020 1:29:40 PM] [8] ScavengerThread: processing next pool</EM></P> <P><EM>ConnectionPool: [05/05/2020 1:29:40 PM] [8] GetReservationEnumerator: entering, instance=ADAM_INSTANCE</EM></P> <P><EM>LdapSessionPoolImplementation: [05/05/2020 1:29:40 PM] [8] Scavenger: waking up at 00:00:40 interval</EM></P> <P><EM>EnumerationContextCache: [05/05/2020 1:30:00 PM] [b] OnTimedEvent: got an event</EM></P> <P><EM>EnumerationContextCache: [05/05/2020 1:30:00 PM] [b] RemoveExpiredEntries called</EM></P> <P><EM>EnumerationContextCache: [05/05/2020 1:30:00 PM] [b] RemoveExpiredEntries -- found 0 entries to remove</EM></P> <P><EM>EnumerationContextCache: [05/05/2020 1:30:00 PM] [b] RemoveExpiredEntries done</EM></P> <P>&nbsp;</P> <P>Next, we see ADWS checking registry keys for NTDS, in order to 
determine if this Instance is actually servicing:</P> <P>&nbsp;</P> <P><EM>InstanceMap: [05/05/2020 1:31:00 PM] [d] CheckAndLoadNTDSInstance: entered</EM></P> <P><EM>InstanceMap: [05/05/2020 1:31:00 PM] [d] CheckAndLoadNTDSInstance: found NTDS Parameters key</EM></P> <P>&nbsp;</P> <P>At this point, ADWS has found that there is an NTDS Parameters registry key (which would contain all the NTDS settings), and due to the presence of this key, ADWS believes this is a Domain Controller providing ADDS services.</P> <P>So, ADWS now checks whether we are indeed meeting the basic requirements for providing ADDS services, more specifically whether the server is providing Global Catalog services:</P> <P>&nbsp;</P> <P><EM>InstanceMap: [05/05/2020 1:31:00 PM] [d] CheckAndLoadGCInstance: entered</EM></P> <P><EM>InstanceMap: [05/05/2020 1:31:00 PM] [d] CheckForGlobalCatalog: entered</EM></P> <P><EM>DirectoryUtilities: [05/05/2020 1:31:00 PM] [d] GetTimeRemaining: remaining time is 00:02:00</EM></P> <P><EM>InstanceMap: [05/05/2020 1:31:01 PM] [d] CheckForGlobalCatalog: isGlobalCatalogReady: </EM></P> <P><EM>InstanceMap: [05/05/2020 1:31:01 PM] [d] GlobalCatalog is not ready to service the requests.</EM></P> <P><EM>InstanceMap: [05/05/2020 1:31:01 PM] [d] CheckAndLoadGCInstance: CheckForGlobalCatalog=<STRONG>False</STRONG></EM></P> <P>&nbsp;</P> <P>At this point, we can see the failure that is triggering the event 1202.</P> <P>After this, ADWS moves on to checking whether the ADAM instances are ready for servicing as well; however, we no longer care about that “noise” in the log file, as we’ve found our problem.</P> <P>&nbsp;</P> <P><STRONG><U>Interpretation of the Data:</U></STRONG></P> <P>The data above tells us the following:</P> <P>&nbsp;</P> <UL> <LI>An NTDS Parameters registry key was found, therefore ADWS assumes that an NTDS instance exists on this server.</LI> <LI>Because of the previous point, ADWS now believes that this server is providing ADDS services (though it is not, it is an LDS 
server).</LI> <LI>Since ADWS believes this is a DC, it checked whether the Global Catalog is ready and the ports are open and servicing, and found that this is not the case.</LI> </UL> <P>&nbsp;</P> <P>So, in simple words, ADWS was tricked into believing that this was a Domain Controller; since it is not one, the isGlobalCatalogReady/CheckForGlobalCatalog check obviously failed.</P> <P>This causes Event 1202 to be logged every minute, because that is the default interval at which this check is performed.</P> <P>&nbsp;</P> <P><STRONG><U>Solution:</U></STRONG></P> <P>The solution in this case is very clear and simple. An AD LDS server is not supposed to have a Parameters key under NTDS; it is not a Domain Controller and will never require any of the values specified under that key.</P> <P>&nbsp;</P> <P>Navigate to <STRONG>HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\</STRONG>, and delete the <STRONG>Parameters </STRONG>key.</P> <P>&nbsp;</P> <P><STRONG><U>A Similar Scenario:</U></STRONG></P> <P>There are other situations in which the same 1202 event is logged, but the server is not an AD LDS server, rather an actual Domain Controller. In these scenarios, the common solution is to set the ADWS service to a delayed start.</P> <P>This is because, in <EM>those</EM> cases, the issue is a “race condition”, where ADWS begins performing its checks before the ADDS services have started, and therefore fails the check and logs the event. I have only seen this scenario with Domain Controllers running Windows Server 2012 R2 and earlier.</P> <P>&nbsp;</P> <P>Thank you, and that's all for now!</P> <P>- Josh</P> Fri, 10 Jul 2020 14:08:52 GMT Ryan Ries 2020-07-10T14:08:52Z Deep Dive: Active Directory ESE Version Store Changes in Server 2019 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 02, 2018 </STRONG> <BR /> Hey everybody. 
<A href="#" target="_blank"> Ryan Ries </A> here to help you fellow AD ninjas celebrate the launch of Server 2019. <BR /> <BR /> Warning: As is my wont, this is a deep dive post. Make sure you've had your coffee before proceeding. <BR /> <BR /> Last month at Microsoft Ignite, many exciting new features rolling out in Server 2019 were discussed. (Watch MS Ignite sessions <A href="#" target="_blank"> here </A> and <A href="#" target="_blank"> here </A>.) <BR /> <BR /> But now I want to talk about an enhancement to on-premises Active Directory in Server 2019 that you won't read or hear about anywhere else. This specific topic is near and dear to my heart personally. <BR /> <BR /> The first section of this article discusses how Active Directory’s sizing of the ESE version store has changed in Server 2019 going forward. The second section discusses some basic debugging techniques related to the ESE version store. <BR /> <BR /> Active Directory, also known as NT Directory Services (NTDS), uses Extensible Storage Engine (ESE) technology as its underlying database. <BR /> <BR /> One component of all ESE database instances is known as the version store. The version store is an in-memory temporary storage location where ESE stores snapshots of the database during open transactions. This allows the database to roll back transactions and return to a previous state in case the transactions cannot be committed. When the version store is full, no more database transactions can be committed, which effectively brings NTDS to a halt. <BR /> <BR /> In 2016, the CSS Directory Services support team blog (also known as AskDS) published some previously undocumented (and some lightly documented) internals regarding the ESE version store. Those new to the concept of the ESE version store should read <A href="#" target="_blank"> that blog post </A> first. 
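<P>To make the version store concept concrete, here is a toy sketch in Python. This is my own illustration, not ESE's actual implementation: a store that snapshots state when a transaction opens, so it can roll back to the previous state if the transaction cannot be committed.</P>

```python
# Toy illustration of a version store (NOT ESE's implementation):
# snapshots taken while a transaction is open allow rollback to a
# previous state if the transaction cannot be committed.
class ToyVersionedStore:
    def __init__(self):
        self.data = {}
        self._snapshots = []  # the in-memory "version store"

    def begin(self):
        # Snapshot the current state when a transaction opens.
        self._snapshots.append(dict(self.data))

    def commit(self):
        # On commit, the snapshot is no longer needed.
        self._snapshots.pop()

    def rollback(self):
        # Abandon the transaction and return to the snapshot.
        self.data = self._snapshots.pop()

store = ToyVersionedStore()
store.data["cn=alice"] = "member of 1 group"
store.begin()
store.data["cn=alice"] = "member of 2 groups"
store.rollback()
print(store.data["cn=alice"])  # -> member of 1 group
```

<P>The analogue of version store exhaustion: if snapshots accumulate faster than transactions commit and the snapshot space fills up, no new transaction can safely begin, which is exactly what brings NTDS to a halt.</P>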
<BR /> <BR /> In the blog post linked to previously, it was demonstrated how Active Directory had calculated the size of the ESE version store since AD’s introduction in Windows 2000. When the NTDS service first started, a complex algorithm was used to calculate version store size. This algorithm included the machine’s native pointer size, number of CPUs, version store page size (based on an assumption which was incorrect on 64-bit operating systems), maximum number of simultaneous RPC calls allowed, maximum number of ESE sessions allowed per thread, and more. <BR /> <BR /> Since the version store is a memory resource, it follows that the most important factor in determining the optimal ESE version store size is the amount of physical memory in the machine, and that - ironically - seems to have been the only variable <EM> not </EM> considered in the equation! <BR /> <BR /> The way that Active Directory calculated the version store size did not age well. The original algorithm was written during a time when all machines running Windows were 32-bit, and even high-end server machines had maybe one or two gigabytes of RAM. <BR /> <BR /> As a result, many customers have contacted Microsoft Support over the years for issues arising on their domain controllers that could be attributed to, or at least exacerbated by, an undersized ESE version store. Furthermore, even though the default ESE version store size can be augmented by the " <A href="#" target="_blank"> EDB max ver pages (increment over the minimum) </A> " registry setting, customers are often hesitant to use the setting because it is a complex topic that warrants heavier and more generous amounts of documentation than what has traditionally been available. <BR /> <BR /> The algorithm is now greatly simplified in Server 2019: <BR /> <BR /> - <STRONG> When NTDS first starts, the ESE version store size is now calculated as 10% of physical RAM, with a minimum of 400MB and a maximum of 4GB. 
</STRONG> <BR /> <BR /> The same calculation applies to physical machines and virtual machines. In the case of virtual machines with dynamic memory, the calculation will be based on the amount of "starting RAM" assigned to the VM. <BR /> <BR /> The "EDB max ver pages (increment over the minimum)" registry setting can still be used, as before, to add additional buckets over the default calculation. (Even beyond 4GB if desired.) The registry setting is in terms of "buckets," not bytes. Version store buckets are 32KB each on 64-bit systems. (They are 16KB on 32-bit systems, but Microsoft no longer supports any 32-bit server OSes.) Therefore, if one adds 5000 "buckets" by setting the registry entry to 5000 (decimal), then 156MB will be added to the default version store size. <BR /> <BR /> A minimum of 400MB was chosen for backwards compatibility, because under the old algorithm the default version store size for a DC with a single 64-bit CPU was ~410MB, regardless of how much memory it had. (There is no way to configure less than the minimum of 400MB, similar to previous Windows versions.) <BR /> <BR /> The advantage of the new algorithm is that the version store size now scales linearly with the amount of memory the domain controller has, when previously it did not. 
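<P>The arithmetic above is easy to sanity-check. Here is a minimal Python sketch of the new sizing rule and the bucket conversion. The function names are mine (Windows exposes no such API), and the exact rounding Windows uses internally is not documented; this is just the rule as stated above:</P>

```python
MB = 1024 * 1024
GB = 1024 * MB
BUCKET = 32 * 1024  # version store bucket size on 64-bit systems

def default_version_store_bytes(physical_ram_bytes):
    # Server 2019 rule as described above:
    # 10% of physical RAM, floor of 400MB, cap of 4GB.
    return min(max(physical_ram_bytes // 10, 400 * MB), 4 * GB)

def buckets_to_bytes(buckets):
    # The registry increment is denominated in 32KB buckets, not bytes.
    return buckets * BUCKET

print(default_version_store_bytes(1 * GB) // MB)    # -> 400 (floor applies)
print(default_version_store_bytes(128 * GB) // MB)  # -> 4096 (cap applies)
print(buckets_to_bytes(5000) / MB)                  # -> 156.25 (the ~156MB above)
```

<P>Note that this sketch computes 10% in binary bytes, so mid-range values can differ slightly from the rounded figures in the defaults table (for example, 8GB of RAM computes to 819MB rather than a flat 800MB).</P>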
<BR /> <BR /> Defaults: <BR /> <TABLE> <TBODY><TR> <TD> Physical Memory in the Domain Controller </TD> <TD> Default ESE Version Store Size </TD> </TR> <TR> <TD> 1GB </TD> <TD> 400MB </TD> </TR> <TR> <TD> 2GB </TD> <TD> 400MB </TD> </TR> <TR> <TD> 3GB </TD> <TD> 400MB </TD> </TR> <TR> <TD> 4GB </TD> <TD> 400MB </TD> </TR> <TR> <TD> 5GB </TD> <TD> 500MB </TD> </TR> <TR> <TD> 6GB </TD> <TD> 600MB </TD> </TR> <TR> <TD> 8GB </TD> <TD> 800MB </TD> </TR> <TR> <TD> 12GB </TD> <TD> 1.2GB </TD> </TR> <TR> <TD> 24GB </TD> <TD> 2.4GB </TD> </TR> <TR> <TD> 48GB </TD> <TD> 4GB </TD> </TR> <TR> <TD> 128GB </TD> <TD> 4GB </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <BR /> This new calculation will result in larger default ESE version store sizes for domain controllers with greater than 4GB of physical memory when compared to the old algorithm. This means more version store space to process database transactions, and fewer cases of version store exhaustion. (Which means fewer customers needing to call us!) <BR /> <P> <EM> Note: This enhancement currently only exists in Server 2019 and there are not yet any plans to backport it to older Windows versions. </EM> </P> <BR /> <P> <EM> Note: This enhancement applies only to Active Directory and not to any other application that uses an ESE database such as Exchange, etc. </EM> </P> <BR /> <BR /> <BR /> <STRONG> ESE Version Store Advanced Debugging and Troubleshooting </STRONG> <BR /> <BR /> <BR /> <BR /> This section will cover some basic ESE version store triage, debugging and troubleshooting techniques. 
<BR /> <BR /> As covered in the AskDS blog post linked to previously, the performance counter used to see how many ESE version store buckets are currently in use is: <BR /> <BR /> \\.\Database ==&gt; Instances(lsass/NTDSA)\Version buckets allocated <BR /> <BR /> <BR /> <BR /> Once that counter has reached its limit, (~12,000 buckets or ~400MB by default,) events will be logged to the Directory Services event log, indicating the exhaustion: <BR /> <BR /> [caption id="attachment_17665" align="alignnone" width="594"] <IMG src="" /> <EM> Figure 1: NTDS version store exhaustion. </EM> [/caption] <BR /> <BR /> <BR /> <BR /> The event can also be viewed graphically in Performance Monitor: <BR /> <BR /> [caption id="attachment_17675" align="alignnone" width="982"] <IMG src="" /> <EM> Figure 2: The plateau at 12,314 means that the performance counter "Version Buckets Allocated" cannot go any higher. The flat line represents a dead patient. </EM> [/caption] <BR /> <BR /> <BR /> <BR /> As long as the domain controller still has available RAM, try increasing the version store size using the previously mentioned registry setting. Increase it in gradual increments until the domain controller is no longer exhausting the ESE version store, or the server has no more free RAM, whichever comes first. Keep in mind that the more memory that is used for version store, the less memory will be available for other resources such as the database cache, so a sensible balance must be struck to maintain optimal performance for <EM> your </EM> workload. (i.e. no one size fits all.) <BR /> <BR /> If the "Version Buckets Allocated" performance counter is still pegged at the maximum amount, then there is some further investigation that can be done using the debugger. <BR /> <BR /> The eventual goal will be to determine the nature of the activity within NTDS that is primarily responsible for exhausting the domain controller of all its version store, but first, some setup is required. 
<BR /> <BR /> First, generate a process memory dump of lsass on the domain controller while the machine is "in state" – that is, while the domain controller is at or near version store exhaustion. To do this, the "Create dump file" option can be used in Task Manager by right-clicking on the lsass process on the Details tab. Optionally, another tool such as Sysinternals’ procdump.exe can be used (with the -ma switch). <BR /> <BR /> In case the issue is transient and only occurs when no one is watching, data collection can be configured on a trigger, using procdump with the -p switch. <BR /> <P> <EM> Note: Do not share lsass memory dump files with unauthorized persons, as these memory dumps can contain passwords and other sensitive data. </EM> </P> <BR /> <BR /> <BR /> It is a good idea to generate the dump after the Version Buckets Allocated performance counter has risen to an abnormally elevated level, but before the version store has plateaued completely. The reason is that the database transaction responsible may be terminated once the exhaustion occurs, and the thread would then no longer be present in the memory dump. If the guilty thread is no longer alive once the memory dump is taken, troubleshooting will be much more difficult. <BR /> <BR /> Next, gather a copy of %windir%\System32\esent.dll from the same Server 2019 domain controller. The esent.dll file contains a debugger extension, but it is highly dependent upon the Windows version: a mismatched version could output incorrect results. It should match the same version of Windows as the memory dump file. <BR /> <BR /> Next, download WinDbg from the Microsoft Store, or from <A href="#" target="_blank"> this link </A> . 
<BR /> <BR /> Once WinDbg is installed, configure the symbol path for Microsoft’s public symbol server: <BR /> <BR /> [caption id="attachment_17685" align="alignnone" width="922"] <IMG src="" /> <EM> Figure 3: srv*c:\symbols*<A href="#" target="_blank"></A> </EM> [/caption] <BR /> <BR /> <BR /> <BR /> Now load the lsass.dmp memory dump file, and load the esent.dll module that you previously collected from the same domain controller: <BR /> <BR /> [caption id="attachment_17695" align="alignnone" width="916"] <IMG src="" /> <EM> Figure 4: .load esent.dll </EM> [/caption] <BR /> <BR /> <BR /> <BR /> Now the ESE database instances present in this memory dump can be viewed with the command !ese dumpinsts: <BR /> <BR /> [caption id="attachment_17705" align="alignnone" width="995"] <IMG src="" /> <EM> Figure 5: !ese dumpinsts - The only ESE instance present in an lsass dump on a DC should be NTDSA. </EM> [/caption] <BR /> <BR /> <BR /> <BR /> Notice that the current version bucket usage is 11,189 out of 12,802 buckets total. The version store in this memory dump is very nearly exhausted. The database is not in a particularly healthy state at this moment. <BR /> <BR /> The command !ese param &lt;instance&gt; can also be used, specifying the same database instance obtained from the previous command, to see global configuration parameters for that ESE database instance. 
Notice that <A href="#" target="_blank"> JET_paramMaxVerPages </A> is set to 12800 buckets, which is 400MB worth of 32KB buckets: <BR /> <BR /> [caption id="attachment_17715" align="alignnone" width="1031"] <IMG src="" /> <EM> Figure 6: !ese param </EM> [/caption] <BR /> <BR /> <BR /> <BR /> To see much more detail regarding the ESE version store, use the !ese verstore &lt;instance&gt; command, specifying the same database instance: <BR /> <BR /> <BR /> <BR /> [caption id="attachment_17725" align="alignnone" width="711"] <IMG src="" /> <EM> Figure 7: !ese verstore </EM> [/caption] <BR /> <BR /> <BR /> <BR /> The output of the command above shows us that there is an open, long-running database transaction, how long it’s been running, and which thread started it. This also matches the same information displayed in the Directory Services event log event pictured previously. <BR /> <BR /> Neither the event log event nor the esent debugger extension were always quite so helpful; they have both been enhanced in recent versions of Windows. <BR /> <BR /> In older versions of the esent debugger extension, the thread ID could be found in the dwTrxContext field of the PIB, (command: !ese dump PIB 0x000001AD71621320) and the start time of the transaction could be found in m_trxidstack as a 64-bit file time. But now the debugger extension extracts that data automatically for convenience. <BR /> <BR /> Switch to the thread that was identified earlier and look at its call stack: <BR /> <BR /> [caption id="attachment_17735" align="alignnone" width="651"] <IMG src="" /> <EM> Figure 8: The guilty-looking thread responsible for the long-running database transaction. 
</EM> [/caption] <BR /> <BR /> <BR /> <BR /> The four functions that are highlighted by a red rectangle in the picture above are interesting, and here’s why: <BR /> <BR /> When an object is deleted on a domain controller, and that object has <A href="#" target="_blank"> links </A> to other objects, those links must also be deleted/cleaned by the domain controller. For example, when an Active Directory user becomes a member of a security group, a database link between the user and the group is created that represents that relationship. The same principle applies to all linked attributes in Active Directory. If the Active Directory Recycle Bin is enabled, then the link-cleaning process will be deferred until the deleted object surpasses its Deleted Object Lifetime – typically 60 or 180 days after being deleted. This is why, when the AD Recycle Bin is enabled, a deleted user can be easily restored with all of its group memberships still intact – because the user account object’s links are not cleaned until after its time in the Recycle Bin has expired. <BR /> <BR /> The trouble begins when an object with <EM> many </EM> backlinks is deleted. Some security groups, distribution lists, RODC password replication policies, etc., may contain hundreds of thousands or even millions of members. Deleting such an object will give the domain controller a lot of work to do. As you can see in the thread call stack shown above, the domain controller had been busily processing links on a deleted object for 47 seconds and still wasn’t done. All the while, more and more ESE version store space was being consumed. <BR /> <BR /> When the AD Recycle Bin is enabled, this can cause even more confusion, because no one remembers that they deleted that gigantic security group 6 months ago. A time bomb has been sitting in the AD Recycle Bin for months. But suddenly, AD replication grinds to a standstill throughout the domain and the admins are scrambling to figure out why. 
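<P>A quick back-of-the-envelope illustration of why the size of such a deletion matters: the links are cleaned in batches, and each batch is one database transaction holding version store while it runs. The batch size below is purely hypothetical, chosen for illustration rather than taken from the actual "Links process batch size" default:</P>

```python
import math

def link_cleanup_batches(link_count, batch_size):
    # Each batch of link-value cleanup is one database transaction
    # that holds version store until it commits.
    return math.ceil(link_count / batch_size)

# A deleted group with 2 million members, hypothetical batch size of 10,000:
print(link_cleanup_batches(2_000_000, 10_000))  # -> 200 transactions
# Halving the batch size doubles the transaction count, but each
# transaction is shorter, easing pressure on the version store:
print(link_cleanup_batches(2_000_000, 5_000))   # -> 400 transactions
```

<P>This is the trade-off behind the tuning knobs discussed next: many short transactions consume version store for less time apiece than a few long ones.</P>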
<BR /> <BR /> The performance counter "\\.\DirectoryServices ==&gt; Instances(NTDS)\Link Values Cleaned/sec" would also show increased activity during this time. <BR /> <BR /> There are two main ways to fight this: either by increasing version store size with the " <A href="#" target="_blank"> EDB max ver pages (increment over the minimum) </A> " registry setting, or by decreasing the batch size with the " <A href="#" target="_blank"> Links process batch size </A> " registry setting, or a combination of both. Domain controllers process the deletion of these links in batches. The smaller the batch size, the shorter the individual database transactions will be, thus relieving pressure on the ESE version store. <BR /> <BR /> Though the default values are properly-sized for almost all Active Directory deployments and most administrators should never have to worry about them, the two previously-mentioned registry settings are supported and well-informed enterprise administrators are encouraged to tweak the values – within reason – to avoid ESE version store depletion. Contact Microsoft customer support before making any modifications if there is any uncertainty. <BR /> <BR /> At this point, one could continue diving deeper, using various approaches (e.g. consider not only debugging process memory, but also consulting DS Object Access audit logs, object metadata from repadmin.exe, etc.) to find out which exact object with many thousands of links was just deleted, but in the end that’s a moot point. There’s nothing else that can be done with that information. The domain controller simply must complete the work of link processing. <BR /> <BR /> In other situations however, it will be apparent using the same techniques shown previously, that it’s an incoming LDAP query from some network client that’s performing inefficient queries, leading to version store exhaustion. Other times it will be DirSync clients. Other times it may be something else. 
In those instances, there may be more you can do besides just tweaking the version store variables, such as tracking down and silencing the offending network client(s), optimizing LDAP queries, creating database indices, etc. <BR /> <BR /> <BR /> <BR /> Thanks for reading, <BR /> - Ryan "Where's My Ship-It Award" Ries </BODY></HTML> Fri, 05 Apr 2019 03:30:41 GMT Ryan Ries 2019-04-05T03:30:41Z TLS Handshake errors and connection timeouts? Maybe it’s the CTL engine…. <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 10, 2018 </STRONG> <BR /> Hi There! <BR /> <BR /> Marius and Tolu here, from the Directory Services Escalation Team. <BR /> <BR /> Today, we’re going to talk about a little twist on some scenarios you may have come across at some point, where TLS connections fail or time out for a variety of reasons. <BR /> <BR /> You’re probably already familiar with some of the usual suspects like cipher suite mismatches, certificate validation errors and TLS version incompatibility, to name a few. <BR /> <BR /> Here are just some examples for illustration (but there is a wealth of information out there): <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> <STRONG> Troubleshooting TLS 1.2 and Certificate Issue with Microsoft Message Analyzer: A Real World Example </STRONG> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <STRONG> TLS 1.2 handshake failure </STRONG> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <STRONG> Troubleshooting SSL related issues (Server Certificate) </STRONG> </A> </LI> <BR /> </UL> <BR /> Recently we’ve seen a number of cases with a variety of symptoms affecting different customers, which all turned out to have a common root cause. <BR /> <BR /> We’ve managed to narrow it down to an unlikely source: a built-in OS feature working in its default configuration. <BR /> <BR /> We’re talking about the <STRONG> automatic root update and automatic disallowed roots update mechanisms </STRONG> based on CTLs. 
<BR /> <BR /> Starting with Windows Vista, root certificates are updated on Windows automatically. <BR /> <BR /> When a user on a Windows client visits a secure Web site (by using HTTPS/TLS), reads a secure email (S/MIME), or downloads an ActiveX control that is signed (code signing) and encounters a certificate which chains to a root certificate not present in the root store, Windows will automatically check the appropriate Microsoft Update location for the root certificate. <BR /> <BR /> If it finds it, it downloads it to the system. To the user, the experience is seamless; they don’t see any security dialog boxes or warnings and the download occurs automatically, behind the scenes. <BR /> <BR /> Additional information in: <BR /> <P> <A href="#" target="_blank"> <STRONG> How Root Certificate Distribution Works </STRONG> </A> </P> <BR /> During TLS handshakes, any certificate chains involved in the connection will need to be validated, and, from Windows Vista/2008 onwards, the automatic <EM> disallowed </EM> root update mechanism is also invoked to verify if there are any changes to the untrusted CTL (Certificate Trust List). <BR /> <BR /> A certificate trust list (CTL) is a predefined list of items that are authenticated and signed by a trusted entity. <BR /> <BR /> The mechanism is described in more detail in the following article: <BR /> <P> <A href="#" target="_blank"> <STRONG> An automatic updater of untrusted certificates is available for Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2 </STRONG> </A> </P> <BR /> It expands on the automatic root update mechanism technology (for trusted root certificates) mentioned earlier to let certificates that are compromised or are untrusted in some way be specifically flagged as untrusted. <BR /> <BR /> Customers therefore benefit from periodic automatic updates to <STRONG> both </STRONG> trusted and untrusted CTLs. <BR /> <BR /> So, after the preamble, what scenarios are we talking about today? 
<BR /> <BR /> Here are some examples of issues we’ve come across recently. <BR /> <BR /> <BR /> <BR /> 1) <BR /> <UL> <BR /> <LI> Your users may experience browser errors after several seconds when trying to browse to secure (https) websites behind a load balancer. </LI> <BR /> <LI> They might receive an error like "The page cannot be displayed. Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in the Advanced settings and try connecting to <A href="#" target="_blank"></A> again.  If this error persists, contact your site administrator." </LI> <BR /> <LI> If they try to connect to the website via the IP address of the server hosting the site, the https connection works after showing a certificate name mismatch error. </LI> <BR /> <LI> All TLS versions ARE enabled when checking in the browser settings: </LI> <BR /> </UL> <BR /> [caption id="attachment_17615" align="alignnone" width="396"] <IMG src="" /> Internet Options[/caption] <BR /> <BR /> <BR /> <BR /> <STRONG> </STRONG> 2) <BR /> <UL> <BR /> <LI> You have a 3rd party appliance making TLS connections to a Domain Controller via LDAPs (Secure LDAP over SSL) which may experience delays of up to 15 seconds during the TLS handshake </LI> <BR /> <LI> The issue occurs randomly when connecting to any eligible DC in the environment targeted for authentication. </LI> <BR /> <LI> There are no intervening devices that filter or modify traffic between the appliance and the DCs </LI> <BR /> </UL> <BR /> 2a) <BR /> <UL> <BR /> <LI> A very similar scenario* to the above is in fact described in the following article by our esteemed colleague, Herbert: </LI> <BR /> </UL> <BR /> <P> <A href="#" target="_blank"> Understanding ATQ performance counters, yet another twist in the world of TLAs </A> </P> <BR /> <P> Where he details: </P> <BR /> <P> <STRONG> <EM> Scenario 2 </EM> </STRONG> </P> <BR /> <P> <EM> DC supports LDAP over SSL/TLS </EM> </P> <BR /> <P> <EM> A user sends a certificate on a session. 
The server needs to check for certificate revocation which may take some time.* </EM> </P> <BR /> <P> <EM> This becomes problematic if network communication is restricted and the DC cannot reach the Certificate Distribution Point (CDP) for a certificate. </EM> </P> <BR /> <P> <EM> To determine if your clients are using secure LDAP (LDAPs), check the counter “LDAP New SSL Connections/sec”. </EM> </P> <BR /> <P> <EM> If there are a significant number of sessions, you might want to look at CAPI-Logging. </EM> </P> <BR /> <BR /> <BR /> 3) <BR /> <UL> <BR /> <LI> A 3 <SUP> rd </SUP> party meeting server performing LDAPs queries against a Domain Controller may fail the TLS handshake on the first attempt after surpassing a pre-configured timeout (e.g. 5 seconds) on the application side </LI> <BR /> <LI> Subsequent connection attempts are successful </LI> <BR /> </UL> <BR /> <BR /> <BR /> So, what’s the story? Are these issues related in any way? <BR /> <BR /> Well, as it turns out, they do have something in common. <BR /> <BR /> As we mentioned earlier, certificate chain validation occurs during TLS handshakes. <BR /> <BR /> Again, there is plenty of documentation on this subject, such as <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> <STRONG> TLS - SSL (Schannel SSP) Overview </STRONG> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <STRONG> Schannel Security Support Provider Technical Reference </STRONG> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <STRONG> How TLS/SSL Works: Logon and Authentication </STRONG> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <STRONG> Client Certificate Authentication (Part 1) </STRONG> </A> </LI> <BR /> </UL> <BR /> During certificate validation operations, the CTL engine gets periodically invoked to verify if there are any changes to the untrusted CTLs.
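<BR /> <BR /> As a quick aside, the “LDAP New SSL Connections/sec” counter Herbert mentions can be sampled with PowerShell. A minimal sketch, assuming you run it on the DC itself (the counter lives under the NTDS performance object):

```text
Get-Counter -Counter '\NTDS\LDAP New SSL Connections/sec' -SampleInterval 5 -MaxSamples 12
```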
<BR /> <BR /> In the example scenarios we described earlier, if the default public URLs for the CTLs are unreachable, and there is no alternative internal CTL distribution point configured (more on this in a minute), the TLS handshake will be delayed until the <EM> WinHttp </EM> call to access the default CTL URL times out. <BR /> <BR /> By default, this timeout is usually around 15 seconds, which can cause problems when load balancers or 3 <SUP> rd </SUP> party applications are involved and have their own (more aggressive) timeouts configured. <BR /> <BR /> If we enable CAPI2 Diagnostic logging, we should be able to see evidence of when and why the timeouts are occurring. <BR /> <BR /> We will see events like the following: <BR /> <BR /> <STRONG> Event ID 20 – Retrieve Third-Party Root Certificate from Network </STRONG> <STRONG> : </STRONG> <BR /> <BR /> <BR /> <UL> <BR /> <LI> <STRONG> Trusted CTL attempt </STRONG> <STRONG> </STRONG> </LI> <BR /> </UL> <BR /> [caption id="attachment_17625" align="alignnone" width="548"] <IMG src="" /> Trusted CTL Attempt[/caption] <BR /> <BR /> <STRONG> </STRONG> <BR /> <UL> <BR /> <LI> <STRONG> Disallowed CTL attempt </STRONG> <STRONG> </STRONG> </LI> <BR /> </UL> <BR /> <STRONG> </STRONG> <BR /> <BR /> [caption id="attachment_17635" align="alignnone" width="540"] <IMG src="" /> Disallowed CTL Attempt[/caption] <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <STRONG> Event ID 53 error message details showing that we have failed to access the disallowed CTL </STRONG> : <BR /> <BR /> <BR /> <BR /> [caption id="attachment_17645" align="alignnone" width="602"] <IMG src="" /> Event ID 53[/caption] <BR /> <BR /> <BR /> <BR /> The following article gives a more detailed overview of the <STRONG> CAPI2 </STRONG> diagnostics feature available on Windows systems, which is very useful when looking at any certificate validation operations occurring on the system: <BR /> <P> <A href="#" target="_blank"> <STRONG> Troubleshooting PKI Problems on 
Windows Vista </STRONG> </A> </P> <BR /> To help us confirm that the CTL updater engine is indeed causing the TLS delays and timeouts we’ve described, we can temporarily disable it for both the trusted and untrusted CTLs and then attempt our TLS connections again. <BR /> <BR /> <BR /> <BR /> To disable it: <BR /> <UL> <BR /> <LI> <EM> Create a backup of this registry key (export and save a copy) </EM> </LI> <BR /> </UL> <BR /> <P> <STRONG> <EM> HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot </EM> </STRONG> </P> <BR /> <BR /> <UL> <BR /> <LI> <EM> Then create the following DWORD registry <STRONG> values </STRONG> under the key </EM> </LI> <BR /> </UL> <BR /> <P> <STRONG> <EM> "EnableDisallowedCertAutoUpdate"=dword:00000000 </EM> </STRONG> </P> <BR /> <P> <STRONG> <EM> "DisableRootAutoUpdate"=dword:00000001 </EM> </STRONG> </P> <BR /> <BR /> <BR /> After applying these steps, you should find that your previously failing TLS connections no longer time out. Your symptoms may vary slightly, but you should see speedier connection times, because we have eliminated the delay of trying and failing to reach the CTL URLs. <BR /> <BR /> So, what now? <BR /> <BR /> We should now <STRONG> REVERT </STRONG> the above registry changes by restoring the backup we created, and <STRONG> evaluate the following, more permanent solutions </STRONG> . <BR /> <BR /> We previously stated that disabling the updater engine should only be a temporary measure to confirm the root cause of the timeouts in the above scenarios.
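<BR /> <BR /> For reference, the temporary test described above can be captured as a .reg file. This contains the same two values listed earlier; remember that it is only a diagnostic step and should be reverted once you have confirmed the root cause:

```text
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot]
"EnableDisallowedCertAutoUpdate"=dword:00000000
"DisableRootAutoUpdate"=dword:00000001
```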
<BR /> <BR /> <BR /> <UL> <BR /> <LI> <STRONG> For the untrusted CTL: </STRONG> <STRONG> </STRONG> <STRONG> </STRONG> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> The automatic disallowed root update mechanism is a built-in OS feature, so we can consider allowing access to the public Microsoft disallowed CTL URL from users’ machines; <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> OR, we can configure and maintain an internal untrusted CTL distribution point as outlined in <A href="#" target="_blank"> Configure Trusted Roots and Disallowed Certificates </A> </LI> <BR /> </UL> <BR /> <BR /> <UL> <BR /> <LI> <STRONG> For the trusted CTL: </STRONG> <STRONG> </STRONG> <STRONG> </STRONG> </LI> <BR /> </UL> <BR /> <UL> <BR /> <LI> For server systems you might consider deploying the trusted 3rd party CA certificates via GPO on an as needed basis </LI> <BR /> </UL> <BR /> <P> <A href="#" target="_blank"> Manage Trusted Root Certificates </A> </P> <BR /> <P> (particularly to avoid hitting the TLS protocol limitation described here: </P> <BR /> <P> <A href="#" target="_blank"> SSL/TLS communication problems after you install KB 931125 </A> ) </P> <BR /> <BR /> <UL> <BR /> <LI> For client systems, you should consider </LI> <BR /> </UL> <BR /> <P> Allowing access to the public allowed Microsoft CTL URL <A href="#" target="_blank"> </A> </P> <BR /> <P> OR </P> <BR /> <P> Defining and maintaining an internal trusted CTL distribution point as outlined in <A href="#" target="_blank"> Configure Trusted Roots and Disallowed Certificates </A> </P> <BR /> <P> OR </P> <BR /> <P> If you require a more granular control of which CAs are trusted by client machines, you can deploy the 3 <SUP> rd </SUP> Party CA certificates as needed via GPO </P> <BR /> <P> <A href="#" target="_blank"> Manage Trusted Root Certificates </A> </P> <BR /> So there you have it. 
We hope you found this interesting, and now have an additional factor to take into account when troubleshooting TLS/SSL communication failures. </BODY></HTML> Fri, 05 Apr 2019 03:29:23 GMT Ryan Ries 2019-04-05T03:29:23Z ESE Deep Dive: Part 1: The Anatomy of an ESE database <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Dec 04, 2017 </STRONG> <BR /> <P> hi! </P> <P> Get your crash helmets on and strap into your seatbelts for a JET engine / ESE database special... </P> <P> This is Linda Taylor, Senior AD Escalation Engineer from the UK here again. And WAIT...... I also somehow managed to persuade Brett Shirley to join me in this post. Brett is a Principal Software Engineer in the ESE Development team, so you can be sure the information in this post is going to be deep and confusing but really interesting and useful, and the kind you cannot find anywhere else :- ) <BR /> BTW, Brett used to write blogs before he grew up and got very busy. And just for fun, you might find this old <A href="#" target="_blank"> “Brett” classic </A> entertaining. I have never forgotten it. :- ) <BR /> Back to today's post... this will be a rather more grown-up post, although we will talk about DITs, in a very scientific fashion. <BR /> <BR /> In this post, we will start from the ground up and dive deep into the overall file format of an ESE database file, including practical skills with <STRONG> esentutl </STRONG> such as how to look at raw database pages. And as the title suggests, this is Part 1, so there will be more! </P> <H2> What is an ESE database? </H2> <P> Let’s start basic. The Extensible Storage Engine (ESE), also known as JET Blue, is a database engine from Microsoft that does not speak SQL. And Brett also says … For those with a historical bent, or from academia, who remember ‘before SQL’ rather than ‘NoSQL’: ESE is modelled after the <A href="#" target="_blank"> ISAMs (indexed sequential access method) </A> that were in vogue in the mid-70s.
;-p <BR /> If you work with Active Directory (which you must do if you are reading this post :-) then you will (I hope!) know that it uses an ESE database. The respective binary is <STRONG> esent.dll </STRONG> (or, since Brett loves Exchange, <STRONG> ese.dll </STRONG> for the Exchange Server install). Applications like Active Directory are all ESE clients and use the <A href="#" target="_blank"> JET APIs </A> to access the ESE database. </P> <P> <IMG src="" /> </P> <P> This post will dive deep into the Blue parts above – the ESE side of things. AD is one huge client of ESE, but there are many other Windows components which use an ESE database (and non-Microsoft software too), so your knowledge in this area is actually very applicable to those other areas. Some examples are below: </P> <P> <IMG src="" /> </P> <H3> Tools </H3> <P> There are several built-in command line tools for looking into an ESE database and related files. </P> <OL> <LI> <P> <A href="#" target="_blank"> esentutl </A> . This is a tool that ships in Windows Server by default for use with Active Directory, Certificate Authority and any other built-in ESE databases.&nbsp; This is what we will be using in this post, and it can be used to look at any ESE database. </P> </LI> </OL> <OL> <LI> <P> <A href="#" target="_blank"> eseutil </A> . This is the Exchange version of the same, typically installed in the Microsoft\Exchange\V15\Bin sub-directory of the Program Files directory. </P> </LI> </OL> <OL> <LI> <P> <A href="#" target="_blank"> ntdsutil </A> . This is a tool specifically for managing AD or AD LDS databases; it cannot be used with generic ESE databases (such as the one produced by the Certificate Authority service).&nbsp; This is installed by default when you add the AD DS or AD LDS role. </P> </LI> </OL> <P> For read operations such as dumping file or log headers it doesn’t matter which tool you use.
But for operations which write to the database you MUST use the matching tool for the application and version (for instance, it is not safe to run <STRONG> esentutl /r </STRONG> from Windows Server 2016 on a Windows Server 2008 DB). Further, throughout this article, if you are looking at an Exchange database instead, you should use eseutil.exe instead of esentutl.exe. For AD and AD LDS always use ntdsutil or esentutl. They have different capabilities, so I use a mixture of both. And Brett says that if you think you can NOT keep the read operations straight from the write operations, play it safe and match the versions and application. </P> <P> During this post, we will use an AD database as our victim example. We may use other ones, like AD LDS, for variety in later posts. </P> <H2> Database logical format - Tables </H2> <P> Let’s start with the logical format. From a logical perspective, an ESE database is a set of tables which have rows and columns and indices. </P> <P> Below is a visual of the list of tables from an AD database in Windows Server 2016. Different ESE databases will have different table names and use those tables in their own ways. </P> <P> <IMG src="" /> </P> <P> In this post, we won’t go into the detail about the DNTs, PDNTs and how to analyze an AD database dump taken with LDP, because this is AD-specific and here we are going to look at the ESE level. Also, there are other blogs and sources where this has already been explained, for example here on <A href="#" target="_blank"> AskPFEPlat. </A> However, if such a post is wanted, tell me and I will endeavor to write one!! </P> <P> It is also worth noting that all ESE databases have a table called <STRONG> MSysObjects </STRONG> and a table called <STRONG> MSysObjectsShadow </STRONG> , which is a backup of <STRONG> MSysObjects. </STRONG> These are also known as “the catalog” of the database, and they store metadata about the client’s schema of the database – i.e.
</P> <OL> <LI> <P> All the tables and their table names, where their associated <A href="#" target="_blank"> B+ trees </A> start in the database, and other miscellaneous metadata. </P> </LI> </OL> <OL> <LI> <P> All the columns for each table and their names (of course), the <A href="#" target="_blank"> type of data </A> stored in them, and various schema constraints. </P> </LI> </OL> <OL> <LI> <P> All the indexes on the tables and their names, and where their associated B+ trees start in the database. </P> </LI> </OL> <P> This is the bootstrap information for ESE to be able to service client requests for opening tables to eventually retrieve rows of data. </P> <H2> Database physical format </H2> <P> From a physical perspective, an ESE database is just a file on disk. It is a collection of fixed-size pages arranged into B+ tree structures. Every database has its page size stamped in the <STRONG> header </STRONG> (it can vary between different clients; AD uses 8 KB). At a high level it looks like this: </P> <P> <IMG src="" /> </P> <P> The first <STRONG> “page” </STRONG> is the Header (H). </P> <P> The second “page” is a Shadow Header (SH), which is a copy of the header. </P> <P> However, in ESE “page number” (also frequently abbreviated “pgno”) has a very specific meaning (and often shows up in ESE events), and the first NUMBERED page of the actual database is page number / pgno 1 but is actually the third “page” (if you are counting from the beginning :-). </P> <P> From here on out though, we will not count the header and shadow header as proper pages, and page number 1 will be the third page, at byte offset = &lt;page size&gt; * 2 = 8192 * 2 (for AD databases). </P> <P> If you don’t know the page size, you can dump the database header with <STRONG> esentutl /mh. </STRONG> </P> <P> Here is a dump of the header for an NTDS.DIT file – the AD database: </P> <P> <IMG src="" /> </P> <P> The page size is the <STRONG> cbDbPage.
</STRONG> AD and AD LDS use a page size of <STRONG> 8 KB </STRONG> . Other databases use different page sizes. </P> <P> A caveat is that to be able to do this, the database must not be in use. So, you’d have to stop the NTDS service on the DC or run esentutl on an offline copy of the database. </P> <P> But the good news is that in WS2016 and above we can now dump a LIVE DB header with the /vss switch! The command you need would be <STRONG> "esentutl /mh ntds.dit /vss” </STRONG> (note: must be run as administrator). </P> <P> All these numbered database pages logically are “owned” by various B+ trees where the actual data for the client is contained … and all these B+ trees have a “type of tree”, and all of a tree’s pages have a “placement in the tree” flag (Root, or Leaf, or implicitly Internal – if not root or leaf). </P> <P> Ok, Brett, that was “proper” tree and page talk -&nbsp; I think we need some pictures to show them... </P> <P> Logically the ownership / containing relationship looks like this: </P> <P> <IMG src="" /> </P> <H3> More about B+ Trees </H3> <P> The pages are in turn arranged into <A href="#" target="_blank"> B+ Trees, </A> where the top page is known as the ‘Root’ page and the bottom pages are ‘Leaf’ pages where all the data is kept.&nbsp; Something like this (note this particular example does not show ‘Internal’ B+ tree pages): </P> <P> <IMG src="" /> </P> <UL> <LI> <P> The upper / parent page has partial keys indicating that all entries with 4245 + A* can be found in pgno 13, and all entries with 4245 + E* can be found in pgno 14, etc. </P> </LI> <LI> <P> Note this is a highly simplified representation of what ESE does … it’s a bit more complicated. </P> </LI> </UL> <UL> <LI> <P> This is not specific to ESE; many database engines have either B trees or B+ trees as a fundamental arrangement of data in their database files.
</P> </LI> </UL> <H3> The Different trees </H3> <P> You should know that there are different types of B+ trees inside the ESE database that are needed for different purposes. These are: </P> <OL> <LI> <P> <STRONG> Data / Primary Trees </STRONG> – hold the table’s primary records, which are used to store regular (and small) column data. </P> </LI> </OL> <OL> <LI> <P> <STRONG> Long Value (LV) Trees </STRONG> – used to store <A href="#" target="_blank"> long values </A> . In other words, large chunks of data which don't fit into the primary record. </P> </LI> </OL> <OL> <LI> <P> <STRONG> Index trees </STRONG> – these are B+ trees used to store indexes. </P> </LI> </OL> <OL> <LI> <P> <STRONG> Space Trees </STRONG> – these are used to track which pages are owned and free / available as new pages for a given B+ tree.&nbsp; Each of the previous three types of B+ tree (Data, LV, and index) may (if the tree is large) have a set of two space trees associated with it. </P> </LI> </OL> <H3> Storing large records </H3> <P> Each row of a table is limited to 8 KB (or whatever the page size is) in Active Directory and AD LDS, i.e. each record has to fit into a single database page of 8 KB... but you are probably aware that you can fit a LOT more than 8 KB into an AD object or an Exchange e-mail! So how do we store large records? </P> <P> Well, we have different types of columns, as illustrated below: </P> <P> <IMG src="" /> </P> <P> Tagged columns can be split out into what we call the Long Value Tree. So in the tagged column we store a simple 4-byte number called a LID (Long Value ID), which then points to an entry in the LV tree. We take the large piece of data, break it up into small chunks, and prefix those with the key for the LID and the offset. </P> <P> So, if every part of the record was a LID / pointer to an LV, then essentially we can fit 1300 LV pointers onto the 8 KB page. BTW, this is what creates the <A href="#" target="_blank"> 1300 attribute limit in AD </A> .
It’s all down to the ESE page size. </P> <P> Now you can also start to see that when you are looking at a whole AD object, you may read pages from various trees to get all the information about your object. For example, for a user with many attributes and group memberships you may have to get data from a page in the ”datatable” \ Primary tree + “datatable” \ LV tree + sd_table \ Primary tree + link_table \ Primary tree. </P> <H3> Index Trees </H3> <P> An index is used for a couple of purposes. Firstly, to list the records in an intelligent order, such as by surname in alphabetical order. Secondly, to cut down the number of records, which sometimes greatly helps speed up searches (especially when the ‘selectivity is high’ – meaning few entries match). </P> <P> Below is a visual illustration (with the B+ trees turned on their side to make the diagram easier) of a primary index, which is the DNT index in the AD database – the Data Tree.&nbsp; And a secondary index of dNSHostName. You can see that the secondary index only contains the records which have a dNSHostName populated. It is smaller. </P> <P> <IMG src="" /> </P> <P> You can also see that in the secondary index, the primary key is the data portion (the name), and then the data is the actual key that links us back to the REAL record itself. </P> <H3> Inside a Database page </H3> <P> Each database page has a fixed <STRONG> header </STRONG> . The header has a <STRONG> checksum </STRONG> as well as other information, like how much free space is on that page and which B-tree it belongs to. </P> <P> Then we have these things called <STRONG> TAGS (or nodes), </STRONG> which store the data. </P> <P> <STRONG> A node can be many things, such as a record in a database table or an entry in an index. </STRONG> </P> <P> The TAGS are actually out of order on the page, but order is established by the tag array at the end.
</P> <UL> <LI> <P> TAG 0 = Page External Header </P> </LI> </UL> <P> This contains variable-sized special information on the page, depending upon the type of B-tree and the type of page in the B-tree (space vs. regular tree, and root vs. leaf). </P> <UL> <LI> <P> TAGs 1, 2, 3, etc. are all “nodes” or lines, and the order is tracked. </P> </LI> </UL> <P> The key &amp; data is specific to the B-tree type. </P> <P> And TAG 1 is actually node 0!!! <STRONG> So here is a visual picture of what an ESE database page looks like: </STRONG> </P> <P> <IMG src="" /> </P> <P> It is possible to calculate this key if you have an object's primary key. In AD this is a DNT. </P> <P> The formula for that (if you are ever crazy enough to need it) would be: </P> <UL> <LI> <P> Start with 0x7F, and if it is a signed INT append a 0x80000000 and then OR in the number. </P> </LI> </UL> <UL> <LI> <P> For example 4248 –&gt; in hex 1098 –&gt; as key 7F80001098 (note 5 bytes). </P> </LI> <LI> <P> Note: the key buffer uses big endian, not little endian (like the x86/amd64 arch). </P> </LI> <LI> <P> If it was a 64-bit int, just insert zeros in the middle (9-byte key). </P> </LI> </UL> <UL> <LI> <P> If it is an unsigned INT, start with 0x7F and just append the number. </P> </LI> <LI> <P> Note: Long Value (LID) trees and ESE’s Space Trees (pgno) are special, no 0x7F (4-byte keys). </P> </LI> <LI> <P> And finally, other non-integer column types, such as String and Binary types, have a different, more complicated format for keys. </P> </LI> </UL> <P> Why is this useful? Because, for example, you can take the DNT of an object, calculate its key, and then seek to its page using esentutl.exe dump page /m functionality and the /k option. </P> <P> The nodes also look different (containing different data) depending on the ESE B+ tree type. Below is an illustration of the different nodes in a Space tree, a Data tree, an LV tree and an Index tree. </P> <P> <IMG src="" /> </P> <P> The green are the keys. The dark blue is data.
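</P> <P> The signed-integer case above is easy to sketch in a few lines of Python. This is only an illustration of the key layout described in the bullets (prefix byte 0x7F, set the high bit, store big-endian), not ESE's actual implementation: </P>

```python
def dnt_to_key(dnt: int) -> bytes:
    """Sketch: build an ESE-style key for a signed 32-bit primary key (e.g. an AD DNT).

    Prefix 0x7F, OR the value with 0x80000000, and store the result big-endian.
    """
    return bytes([0x7F]) + ((0x80000000 | dnt) & 0xFFFFFFFF).to_bytes(4, "big")

# The article's example: DNT 4248 (hex 1098) becomes the 5-byte key 7F 80 00 10 98
print(dnt_to_key(4248).hex().upper())  # 7F80001098
```

<P> A key computed this way is the kind of value you would feed to the /k option mentioned above. </P> <P>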
</P> <H3> What does a REAL page look like? </H3> <P> You can use <STRONG> esentutl </STRONG> to dump pages of the database if, for example, you are investigating some corruption. </P> <P> Before we can dump a page, we want to find a page of interest (picking a random page could give you just a blank page) … so first we need some info about the table schema. To start, you can dump all the tables and their associated root page numbers like this: </P> <P> <IMG src="" /> </P> <P> Note, we have findstr’d the output again to get a nice view of just the tables and their pgnoFDP and objidFDP. <STRONG> Findstr.exe </STRONG> is case-sensitive, so use the exact format or use the /i switch. </P> <P> <STRONG> objidFDP </STRONG> identifies this table in the catalog metadata. When looking at a database page, we can use its objidFDP to tell which table the page belongs to. </P> <P> <STRONG> pgnoFDP </STRONG> is the page number of the <STRONG> Father Data Page </STRONG> – the very top page of that <A href="#" target="_blank"> B+ tree </A> , also known as the root page.&nbsp; If you run esentutl /mm &lt;dbname&gt; on its own, you will see a huge list of every table and B-tree (except internal “space” trees), including all the indexes. </P> <P> So, in this example, page 31 is the root page of the datatable. </P> <H3> Dumping a page </H3> <P> You can dump a page with esentutl using /m and /p. Below is an example of dumping page 31 from the database - the root page of the “datatable” table as above. </P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> The objidFDP is the number indicating which B-tree the page belongs to. And the cbFree tells us how much of this page is free (cb = count of bytes).
Each database page has a double header checksum – one ECC (Error Correcting Code) checksum for single-bit data correction, and a higher-fidelity XOR checksum to catch all other errors, including 3-or-more-bit errors that the ECC may not catch.&nbsp; In addition, we compute a logged data checksum from the page data, but this is not stored in the header and is only utilized by the Exchange 2016 Database Divergence Detection feature. </P> <P> You can see this is a root page and it has 3 nodes (4 TAGS – remember, TAG 1 is node 0, also known as line 0! :) and it is nearly empty! (cbFree = 8092 bytes, so only 100 bytes are used for these 3 nodes + page header + external header). </P> <P> And notice the PageFlushType, which is related to the JET Flush Map file we could talk about in another post later. </P> <P> The nodes here point to pages lower down in the tree. And we could dump a next-level page (pgno 1438)... and we can see them getting deeper and more spread out with more nodes. </P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> So you can see this page has 294 nodes, which again all point to other pages! It is also a <STRONG> ParentOfLeaf </STRONG> , meaning these pgno / page numbers actually point to leaf pages (with the final data on them). </P> <P> Are you bored yet? <IMG src="" /> </P> <P> Or are you enjoying this like a geek? Either way, we are nearly done with the page internals and the tree climbing here. </P> <P> If you navigate further down, eventually you will get a page with some data on it. For example, let's dump <STRONG> page 69 </STRONG> , which TAG 6 is pointing to: </P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> So this one has some data on it (as indicated by the “Leaf page” indicator under the fFlags).
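</P> <P> To recap, the read-only esentutl switches used so far in this walkthrough are the following (run from an elevated prompt; /vss requires WS2016 or later, and without it the database must be offline): </P>

```text
esentutl /mh ntds.dit /vss     -  dump the database header (cbDbPage, state, etc.)
esentutl /mm ntds.dit          -  dump the table / B-tree metadata (objidFDP, pgnoFDP)
```

<P>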
</P> <P> Finally, you can also dump the data - the contents of a node (ie TAG) with the /n switch like this: </P> <P> <IMG src="" /> </P> <P> <STRONG> Remember: </STRONG> The <STRONG> <STRONG> /n </STRONG> </STRONG> specifier takes a <STRONG> pgno : line </STRONG> or node specifier … this means that the :3 here, dumped TAG 4 from the previous screen.&nbsp; And note that trying to dump “/n69:4” would actually fail. </P> <P> This /n will dump all the raw data on the page along with the information of columns and their contents and types. The output also needs some translation because it gives us the columnID (711 in the above example) and not the attribute name in AD (or whatever your database may be). The application developer would then be able to translate those column IDs to some meaningful information. For AD and ADLDS, we can translate those to attribute names using the source code. </P> <P> Finally, there really should be no need to do this in real life, other than in a situation where you are debugging a database problem. However, we hope this provided a good and ‘realistic’ demo to help understand and visualize the structure of an ESE database and how the data is stored inside it! </P> <P> Stay tuned for more parts .... which Brett says will be significantly more useful to everyday administrators! ;-) </P> <H2> The End! </H2> <P> Linda &amp; Brett </P> </BODY></HTML> Fri, 05 Apr 2019 03:28:41 GMT Lindakup 2019-04-05T03:28:41Z Introducing Lingering Object Liquidator v2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 09, 2017 </STRONG> <BR /> Greetings again&nbsp;AskDS! <BR /> <BR /> <A href="#" target="_blank"> Ryan Ries </A> here. Got something exciting to talk about. <BR /> <BR /> You might be familiar with the original <A href="#" target="_blank"> Lingering Object Liquidator </A> tool that was released a few years ago. 
<BR /> <BR /> Today,&nbsp;we're proud to announce&nbsp;version 2 of <A href="#" target="_blank"> Lingering Object Liquidator </A> ! <BR /> <BR /> Because Justin's blog post from 2014 covers the fundamentals of what lingering objects are so well, I don't think I need to go over it again here. If you need to know what lingering objects in Active Directory are, and why you want to get rid of them, then please go read <A href="#" target="_blank"> that post </A> first. <BR /> <BR /> The new version of the Lingering Object Liquidator tool began its life as an attempt to address some of the long-standing limitations of the old version. For example, the old version would just stop the entire&nbsp;scan&nbsp;when it encountered a single domain controller that was unreachable. The new version will just skip the unreachable DC and continue scanning the other DCs that are reachable. There are multiple other improvements in the tool as well, such as multithreading and more exhaustive logging. <BR /> <BR /> Before we take a look at the new tool, there are some things you should know: <BR /> <BR /> 1) Lingering Object Liquidator -&nbsp;neither the old version nor the new version -&nbsp;is covered by CSS Support Engineers. A small group of us (including yours truly) have provided this tool as a convenience to you, but it comes with no guarantees. If you find a problem with the tool, or have a feature request, drop a line to the public <A href="" target="_blank"> AskDS email address </A> , or submit feedback to the <A href="#" target="_blank"> Windows Server UserVoice forum </A> , but please don't bother Support Engineers with it on a support case. <BR /> <BR /> 2) Don't immediately go into your production Active Directory forest and start wildly deleting things just because they show up as lingering objects in the tool.&nbsp;Please carefully review and consider any&nbsp;AD objects that are reported to be lingering objects before deleting them.
<BR /> <BR /> 3) The tool may report some false positives for deleted objects that are very close to the garbage collection age. To mitigate this issue, you can <A href="#" target="_blank"> manually initiate garbage collection </A> on your domain controllers before using this tool. (We may add this so the tool does it automatically in the future.) <BR /> <BR /> 4) The tool will continue to evolve and improve based on <B> your </B> feedback! Contact the AskDS alias or the Uservoice forum linked to in #1 above with any questions, concerns, bug reports or feature requests. <BR /> <H2> Graphical User Interface Elements </H2> <BR /> Let's begin by looking at the graphical user interface. Below is a legend that explains each UI element: <BR /> <BR /> [caption id="attachment_16956" align="alignnone" width="1044"] <IMG src="" /> Lingering Object Liquidator v2[/caption] <BR /> <BR /> A) "Help/About" label. Click this and a page should open up in your default web browser with extra information and detail regarding Lingering Object Liquidator. <BR /> <BR /> B) "Check for Updates" label. Click this and the tool will check for a newer version than the one you're currently running. <BR /> <BR /> C) "Detect AD Topology" button. This is the first button that should be clicked in most scenarios. The AD Topology must be generated first, before proceeding on to the later phases of lingering object detection and removal. <BR /> <BR /> D) "Naming Context" drop-down menu. (Naming Contexts are sometimes referred to as partitions.) Note that this drop-down menu is only available after AD Topology has been successfully discovered. It contains each Active Directory naming context in the forest. If you know precisely which Active Directory Naming context that you want to scan for lingering objects, you can select it from this menu. (Note: The Schema partition is omitted because it does not support deletion, so in theory it cannot contain lingering objects.) 
If you do not know which naming contexts may contain lingering objects, you can select the "[Scan All NCs]" option and the tool will scan each Naming Context that it was able to discover during the AD Topology phase. <BR /> <BR /> E) "Reference DC" drop-down menu. Note that this drop-down menu is only available after AD Topology has been successfully discovered. The reference DC is the "known-good" DC against which you will compare other domain controllers for lingering objects. If a domain controller contains AD objects that do not exist on the Reference DC, they will be considered lingering objects. If you select the "[Scan Entire Forest]" option, then the tool will (arbitrarily) select one global catalog from each domain in the forest that is known to be reachable. <B> It is recommended that you wisely choose a known-good DC yourself, because the tool doesn't necessarily know "the best" reference DC to pick. It will pick one at random. </B> <BR /> <BR /> F) "Target DC" drop-down menu. Note that this drop-down menu is only available after AD Topology has been successfully discovered. The Target DC is the domain controller that is suspected of containing lingering objects. The Target DC will be compared against the Reference DC, and each object that exists on the Target DC but not on the Reference DC is considered a lingering object. If you aren’t sure which DC(s) contain lingering objects, or just want to scan all domain controllers, select the "[Target All DCs]" option from the drop-down menu. <BR /> <BR /> G) "Detect Lingering Objects" button. Note that this button is only available after AD Topology has been successfully discovered. After you have made the appropriate selections in the three aforementioned drop-down menus, click the Detect Lingering Objects button to run the scan. Clicking this button only runs a scan; it does not delete anything. 
The tool will automatically detect and avoid certain nonsensical situations, such as the user specifying the same Reference and Target DCs, or selecting a Read-Only Domain Controller (RODC) as a Reference DC. <BR /> <BR /> H) "Select All" button. Note that this button does not become available until after lingering objects have been detected. Clicking it merely selects all rows from the table below. <BR /> <BR /> I) "Remove Selected Lingering Objects" button. This button will attempt to delete all lingering objects that have been detected by the detection process. You can select a range of items from the list using the shift key and the arrow keys. You can select and unselect specific items by holding down the control key and clicking on them. If you want to just select all items, click the "Select All" button. <BR /> <BR /> J) "Removal Method" radio buttons. These are mutually exclusive. You can choose which of the two supported methods you want to use to remove the lingering objects that have been detected. The "removeLingeringObject" method refers to the <A href="#" target="_blank"> rootDSE modify operation </A> , which can be used to "spot-remove" individual lingering objects. In contrast, the <A href="#" target="_blank"> DsReplicaVerifyObjects method </A> will remove <B> all </B> lingering objects all at once. This intention is reflected in the GUI by all lingering objects automatically being selected when the DsReplicaVerifyObjects method is chosen. <BR /> <BR /> K) "Import" button. This imports a previously-exported list of lingering objects. <BR /> <BR /> L) "Export" button. This exports the selected lingering objects to a file. <BR /> <BR /> M) The "Target DC Column." This column tells you which Target DC was seen to have contained the lingering object. <BR /> <BR /> N) The "Reference DC Column." This column tells you which Reference DC was used to determine that the object in question was lingering. <BR /> <BR /> O) The "Object DN Column." 
This column contains the distinguished name of the lingering object. <BR /> <BR /> P) The "Naming Context Column." This column contains the Naming Context that the lingering object resides in. <BR /> <BR /> Q) The "Lingering Object ListView". This "ListView" works similarly to a spreadsheet. It will display all lingering objects that were detected. You can think of each row as a lingering object. You can click on the column headers to sort the rows in ascending or descending order, and you can resize the columns to fit your needs. <B> NOTE: If you right-click on the lingering object listview, the selected lingering objects (if any) will be copied to your clipboard. </B> <BR /> <BR /> R) The "Status" box. The status box contains diagnostics and operational messages from the Lingering Object Liquidator tool. Everything that is logged to the status box in the GUI is also mirrored to a text log file. <BR /> <H2> User-configurable Settings </H2> <BR /> <BR /> <BR /> The user-configurable settings in Lingering Object Liquidator are alluded to in the Status box when the application first starts. <BR /> Key: HKLM\SOFTWARE\LingeringObjectLiquidator <BR /> Value: DetectionTimeoutPerDCSeconds <BR /> Type: DWORD <BR /> Default: 300 seconds <BR /> <BR /> This setting affects the “Detect Lingering Objects” scan. Lingering Object Liquidator establishes event log “subscriptions” to each Target DC that it needs to scan. The tool then waits for the DC to log an event (Event ID 1942 in the Directory Service event log) signaling that lingering object detection has completed for a specific naming context. Only once a certain number of those events (depending on your choices in the “Naming Context” drop-down menu) have been received from the remote domain controller does the tool know that particular domain controller has been fully scanned.
However, there is an overall timeout, and if the tool does not receive the requisite number of Event ID 1942s in the allotted time, the tool “gives up” and proceeds to the next domain controller. <BR /> Key: HKLM\SOFTWARE\LingeringObjectLiquidator <BR /> Value: ThreadCount <BR /> Type: DWORD <BR /> Default: 8 threads <BR /> <BR /> This setting sets the maximum number of threads to use during the “Detect Lingering Objects” scan. Using more threads may decrease the overall time it takes to complete a scan, especially in very large environments. <BR /> <H2> Tips </H2> <BR /> The domain controllers must allow the network connectivity required for remote event log management for Lingering Object Liquidator to work. You can enable the required Windows Firewall rules using the following line of PowerShell: <BR /> Get-NetFirewallRule -DisplayName "Remote Event Log*" | Enable-NetFirewallRule <BR /> <BR /> Check the download site often for new versions! (There's also a handy "check for updates" option in the tool.) <BR /> <BR /> <BR /> <H2> Final Words </H2> <BR /> We provide this tool because we at AskDS want your Active Directory lingering object removal experience to go as smoothly as possible. If you find any bugs or have any feature requests, please drop a note to <A href="" target="_blank"> our public contact alias </A> . <BR /> <BR /> Download: <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> <H2> Release Notes </H2> <BR /> v2.0.19: <BR /> - Initial release to the public. <BR /> <BR /> v2.0.21: <BR /> - Added new radio buttons that allow the user more control over which lingering object removal method they want to use - the DsReplicaVerifyObjects method or removeLingeringObject method. <BR /> - Fixed issue with Export button not displaying the full path of the export file. <BR /> - Fixed crash when unexpected or corrupted data is returned from event log subscription. 
<BR /> </BODY></HTML> Fri, 05 Apr 2019 03:25:35 GMT Ryan Ries 2019-04-05T03:25:35Z Active Directory Experts: apply within <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 26, 2017 </STRONG> <BR /> <P> Hi all! Justin Turner here from the Directory Services team with a brief announcement: <B> We are hiring! </B> </P> <P> Would you like to join the U.S. Directory Services team and work on the most technically challenging and interesting Active Directory problems? Do you want to be the next <A href="#" target="_blank"> Ned Pyle </A> or <A href="#" target="_blank"> Linda Taylor </A> ? </P> <P> Then read more… </P> <P> We are an escalation team based out of Irving, Texas; Charlotte, North Carolina; and Fargo, North Dakota. We work with enterprise customers helping them resolve the most critical Active Directory infrastructure problems as well as enabling them to get the best of Microsoft Windows and Identity-related technologies. The work we do is no ordinary support – we work with a huge variety of customer environments and there are rarely two problems which are the same. </P> <P> You will need strong AD knowledge, strong troubleshooting skills, along with great collaboration, team work and customer service skills. </P> <P> If this sounds like you, please apply here: </P> <P> Irving, Texas (Las Colinas): <BR /> <A href="#" target="_blank">;pg=0&amp;so=&amp;rw=2&amp;jid=290399&amp;jlang=en&amp;pp=ss </A> </P> <P> Charlotte, North Carolina: <BR /> <A href="#" target="_blank">;pg=0&amp;so=&amp;rw=1&amp;jid=290398&amp;jlang=en&amp;pp=ss </A> </P> <P> U.S. citizenship is required for these positions. </P> <P> <BR /> </P> <P> Justin </P> </BODY></HTML> Fri, 05 Apr 2019 03:25:20 GMT JustinTurner 2019-04-05T03:25:20Z Using Debugging Tools to Find Token and Session Leaks <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 05, 2017 </STRONG> <BR /> Hello AskDS readers and Identity aficionados. Long time no blog. 
<BR /> <BR /> <A href="#" target="_blank"> Ryan Ries </A> here, and today I have a relatively "hardcore" blog post that will not be for the faint of heart. However, it's about an important topic. <BR /> <BR /> The behavior surrounding security tokens and logon sessions has recently changed on all supported versions of Windows. IT professionals - developers and administrators alike - should understand what this new behavior is, how it can affect them, and how to troubleshoot it. <BR /> <BR /> But first, a little background... <BR /> <BR /> [caption id="attachment_16655" align="alignnone" width="394"] <IMG src="" /> Figure 1 - Tokens[/caption] <BR /> <BR /> Windows uses <A href="#" target="_blank"> security tokens </A> (or access tokens) extensively to control access to system resources. Every thread running on the system uses a security token, and may own several at a time. Threads inherit the security tokens of their parent processes by default, but they may also use special security tokens that represent other identities in an activity known as impersonation. Since security tokens are used to grant access to resources, they should be treated as highly sensitive, because if a malicious user can gain access to someone else's security token, they will be able to access resources that they would not normally be authorized to access. <BR /> <BR /> <EM> Note: Here are some additional references you should read first if you want to know more about access tokens: </EM> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> <EM> What's in a Token </EM> </A> </LI> <BR /> <LI> <A href="#" target="_blank"> <EM> What's in a Token Part 2 - Impersonation </EM> </A> </LI> <BR /> <LI> <I> <EM> <A href="#" target="_blank"> Windows Internals, 6 <SUP> th </SUP> Ed. Chapter 6 </A> </EM> </I> </LI> <BR /> </UL> <BR /> If you are an application developer, your application or service may want to create or duplicate tokens for the legitimate purpose of impersonating another user. 
A typical example would be a server application that wants to impersonate a client to verify that the client has permissions to access a file or database. The application or service must be diligent in how it handles these access tokens by releasing/destroying them as soon as they are no longer needed. If the code fails to call the <A href="#" target="_blank"> CloseHandle </A> function on a token handle, that token can then be "leaked" and remain in memory long after it is no longer needed. <BR /> <BR /> And that brings us to <A href="#" target="_blank"> Microsoft Security Bulletin MS16-111 </A> . <BR /> <BR /> Here is an excerpt from that Security Bulletin: <BR /> <BLOCKQUOTE> Multiple Windows session object elevation of privilege vulnerabilities exist in the way that Windows handles session objects. <BR /> <BR /> A locally authenticated attacker who successfully exploited the vulnerabilities could hijack the session of another user. <BR /> To exploit the vulnerabilities, the attacker could run a specially crafted application. <BR /> The update corrects how Windows handles session objects to prevent user session hijacking. </BLOCKQUOTE> <BR /> Those vulnerabilities were fixed with that update, and I won't further expound on the "hacking/exploiting" aspect of this topic. We're here to explore this from a debugging perspective. <BR /> <BR /> This update is significant because it changes how the relationship between tokens and logon sessions is treated across all supported versions of Windows going forward. Applications and services that erroneously leak tokens have always been with us, but the penalty paid for leaking tokens is now greater than before. After MS16-111, when security tokens are leaked, the logon sessions associated with those security tokens also remain on the system until all associated tokens are closed... even after the user has logged off the system. 
If the tokens associated with a given logon session are never released, then the system now has a permanent logon session leak as well. If this leak happens often enough, such as on a busy Remote Desktop/Terminal Server where users are logging on and off frequently, it can lead to resource exhaustion on the server, performance issues, and denial of service, ultimately causing the system to require a reboot to be returned to service. <BR /> <BR /> Therefore, it's more important than ever to be able to identify the symptoms of token and session leaks, track down token leaks on your systems, and get your application vendors to fix them. <BR /> <BR /> <B> How Do I Know If My Server Has Leaks? </B> <BR /> <BR /> As mentioned earlier, this problem affects heavily-utilized Remote Desktop Session Host servers the most, because users are constantly logging on and logging off the server. The issue is not limited to Remote Desktop servers, but symptoms will be most obvious there. <BR /> <BR /> Figuring out that you have logon session leaks is the easy part. Just run <A href="#" target="_blank"> qwinsta </A> at a command prompt: <BR /> <BR /> [caption id="attachment_16665" align="alignnone" width="621"] <IMG src="" /> Figure 2 - qwinsta[/caption] <BR /> <BR /> Pay close attention to the session ID numbers, and notice the large gap between session 2 and session 152. This is the clue that the server has a logon session leak problem. The next user that logs on will get session 153, the next user will get session 154, the next user will get session 155, and so on. But the session IDs will never be reused. We have 150 "leaked" sessions in the screenshot above, where no one is logged on to those sessions, no one will ever be able to log on to those sessions again (until a reboot), yet they remain on the system indefinitely.
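That session-ID-gap check can also be scripted. Below is a rough, hypothetical sketch: the function name is invented, and it assumes the session ID is the first standalone number on each line of qwinsta-style output (real qwinsta output varies by version and locale, so treat this as illustrative only):

```python
import re

def find_session_gaps(qwinsta_text, threshold=10):
    """Scan qwinsta-style output for large jumps between session IDs.

    A big gap (e.g. 2 -> 152) suggests leaked logon sessions, since IDs
    would normally be reused after users log off. Assumes the session ID
    is the first whitespace-delimited number on each line.
    """
    ids = []
    for line in qwinsta_text.splitlines()[1:]:      # skip the header row
        match = re.search(r"(?:^|\s)(\d+)(?:\s|$)", line)
        if match:
            ids.append(int(match.group(1)))
    ids.sort()
    return [(a, b) for a, b in zip(ids, ids[1:]) if b - a > threshold]

sample = """ SESSIONNAME       USERNAME      ID  STATE
 services                         0  Disc
 console           admin         2  Active
 rdp-tcp#12        alice       152  Active"""

print(find_session_gaps(sample))   # [(2, 152)]
```

On a healthy server you would expect an empty list; a jump like the 2-to-152 gap above is worth investigating.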
This means each user who logs onto the system is inadvertently leaving tokens lying around in memory, probably because some application or service on the system duplicated the user's token and didn't release it. These leaked sessions will forever be unusable and soak up system resources. And the problem will only get worse as users continue to log on to the system. In an optimal situation where there were no leaks, sessions 3-151 would have been destroyed after the users logged out and the resources consumed by those sessions would then be reusable by subsequent logons. <BR /> <BR /> <B> How Do I Find Out Who's Responsible? </B> <BR /> <BR /> Now that you know you have a problem, next you need to track down the application or service that is responsible for leaking access tokens. When an access token is created, the token is associated to the logon session of the user who is represented by the token, and an internal reference count is incremented. The reference count is decremented whenever the token is destroyed. If the reference count never reaches zero, then the logon session is never destroyed or reused. Therefore, to resolve the logon session leak problem, you must resolve the underlying token leak problem(s). It's an all-or-nothing deal. If you fix 10 token leaks in your code but miss 1, the logon session leak will still be present as if you had fixed none. <BR /> <BR /> Before we proceed: I would recommend debugging this issue on a lab machine, rather than on a production machine. If you have a logon session leak problem on your production machine, but don't know where it's coming from, then install all the same software on a lab machine as you have on the production machine, and use that for your diagnostic efforts. You'll see in just a second why you probably don't want to do this in production. <BR /> <BR /> The first step to tracking down the token leaks is to enable token leak tracking on the system. 
<BR /> <BR /> Modify this registry setting: <BR /> HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel <BR /> SeTokenLeakDiag = 1 (DWORD) <BR /> <BR /> The registry setting won't exist by default unless you've done this before, so create it. It also did not exist prior to MS16-111, so don't expect it to do anything if the system does not have MS16-111 installed. This registry setting enables extra accounting on token issuance that you will be able to detect in a debugger, and there may be a noticeable performance impact on busy servers. Therefore, it is not recommended to leave this setting in place unless you are actively debugging a problem. (i.e., don't do it in production, exhibit A.) <BR /> <BR /> Prior to the existence of this registry setting, token leak tracing of this kind used to require a checked build of Windows. And Microsoft does not appear to be releasing a checked build of Server 2016, so... good timing. <BR /> <BR /> Next, you need to configure the server to take a full or kernel memory dump when it crashes. (A live kernel debug may also be an option, but that is outside the scope of this article.) I recommend using <A href="#" target="_blank"> DumpConfigurator </A> to configure the computer for complete crash dumps. A kernel dump should be enough to see most of what we need, but get a Complete dump if you can. <BR /> <BR /> [caption id="attachment_16675" align="alignnone" width="860"] <IMG src="" /> Figure 3 - DumpConfigurator[/caption] <BR /> <BR /> Then reboot the server for the settings to take effect. <BR /> <BR /> Next, you need users to log on and off the server, so that the logon session IDs continue to climb. Since you're doing this in a lab environment, you might want to use a script to automatically log on and log off a set of test users. (I provided a sample script for you <A href="#" target="_blank"> here </A> .)
Make sure you've waited 10 minutes after the users have logged off to verify that their logon sessions are permanently leaked before proceeding. <BR /> <BR /> Finally, crash the box. Yep, just crash it. (i.e., don't do it in production, exhibit B.) On a physical machine, this can be done by holding the right Ctrl key and pressing Scroll Lock twice (Right Ctrl+Scroll Lock+Scroll Lock) if you configured the appropriate setting with DumpConfigurator earlier. If this is a Hyper-V machine, you can use the following PowerShell cmdlet on the Hyper-V host: <BR /> Debug-VM -VM (Get-VM RDS1) -InjectNonMaskableInterrupt <BR /> You may have at your disposal other means of getting a non-maskable interrupt to the machine, such as an out-of-band management card (iLO/DRAC, etc.), but the point is to deliver an NMI to the machine, and it will bugcheck and generate a memory dump. <BR /> <BR /> Now transfer the memory dump file (C:\Windows\Memory.dmp usually) to whatever workstation you will use to perform your analysis. <BR /> <BR /> <EM> Note: Memory dumps may contain sensitive information, such as passwords, so be mindful when sharing them with strangers. </EM> <BR /> <BR /> Next, install the Windows Debugging Tools on your workstation if they're not already installed. I downloaded mine for this demo from the Windows Insider Preview SDK <A href="#" target="_blank"> here </A> . But they also come with the SDK, the WDK, WPT, Visual Studio, etc. The more recent the version, the better. <BR /> <BR /> Next, download the <A href="#" target="_blank"> MEX Debugging Extension for WinDbg </A> . Engineers within Microsoft have been using the MEX debugger extension for years, but only recently has a public version of the extension been made available. The public version is stripped-down compared to the internal version, but it's still quite useful. Unpack the file and place mex.dll into your C:\Debuggers\winext directory, or wherever you installed WinDbg.
<BR /> <BR /> Now, ensure that your symbol path is configured correctly to use the Microsoft public symbol server within WinDbg: <BR /> <BR /> [caption id="attachment_16685" align="alignnone" width="596"] <IMG src="" /> Figure 4 - Example Symbol Path in WinDbg[/caption] <BR /> <BR /> The example symbol path above tells WinDbg to download symbols from the specified URL, and store them in your local C:\Symbols directory. <BR /> <BR /> Finally, you are ready to open your crash dump in WinDbg: <BR /> <BR /> [caption id="attachment_16695" align="alignnone" width="415"] <IMG src="" /> Figure 5 - Open Crash Dump from WinDbg[/caption] <BR /> <BR /> After opening the crash dump, the first thing you'll want to do is load the MEX debugging extension that you downloaded earlier, by typing the command: <BR /> <BR /> [caption id="attachment_16705" align="alignnone" width="269"] <IMG src="" /> Figure 6 - .load mex[/caption] <BR /> <BR /> The next thing you probably want to do is start a log file. It will record everything that goes on during this debugging session, so that you can refer to it later in case you forgot what you did or where you left off. <BR /> <BR /> [caption id="attachment_16715" align="alignnone" width="652"] <IMG src="" /> Figure 7 - !logopen[/caption] <BR /> <BR /> Another useful command that is among the first things I always run is !DumpInfo, abbreviated !di, which simply gives some useful basic information about the memory dump itself, so that you can verify at a glance that you've got the correct dump file, which machine it came from and what type of memory dump it is. <BR /> <BR /> [caption id="attachment_16725" align="alignnone" width="620"] <IMG src="" /> Figure 8 - !DumpInfo[/caption] <BR /> <BR /> You're ready to start debugging. <BR /> <BR /> At this point, I have good news and I have bad news. 
<BR /> <BR /> The good news is that there already exists a super-handy debugger extension that lists all the logon session kernel objects, their associated token reference counts, what process was responsible for creating the token, and even the token creation stack, all with a single command! It's <A href="#" target="_blank"> !kdexts.logonsession </A> , and it is <EM> awesome </EM> . <BR /> <BR /> The bad news is that it doesn't work... not with public symbols. It only works with private symbols. Here is what it looks like with public symbols: <BR /> <BR /> [caption id="attachment_16735" align="alignnone" width="1163"] <IMG src="" /> Figure 9 - !kdexts.logonsession - public symbols lead to lackluster output[/caption] <BR /> <BR /> As you can see, most of the useful stuff is zeroed out. <BR /> <BR /> Since public symbols are all you have unless you work at Microsoft ( <A href="#" target="_blank"> and we wish you did </A> ), I'm going to teach you how to do what !kdexts.logonsession does, manually. The hard way. Plus some extra stuff. Buckle up. <BR /> <BR /> First, you should verify whether token leak tracking was turned on when this dump was taken. (That was the registry setting mentioned earlier.) <BR /> <BR /> [caption id="attachment_16745" align="alignnone" width="530"] <IMG src="" /> Figure 10 - x nt!SeTokenLeakTracking = &lt;no type information&gt;[/caption] <BR /> <BR /> OK... That was not very useful. We're getting &lt;no type information&gt; because we're using public symbols. But this symbol corresponds to the SeTokenLeakDiag registry setting that we configured earlier, and we know that's just 0 or 1, so we can just guess what type it is: <BR /> <BR /> [caption id="attachment_16755" align="alignnone" width="307"] <IMG src="" /> Figure 11 - db nt!SeTokenLeakTracking L1[/caption] <BR /> <BR /> The db command means "dump bytes." (dd, or "dump DWORDs," would have worked just as well.)
You should have a symbol for <BR /> nt!SeTokenLeakTracking if you configured your symbol path properly, and the L1 tells the debugger to just dump the first byte it finds. It should be either 0 or 1. If it's 0, then the registry setting that we talked about earlier was not set properly, and you can basically just discard this dump file and get a new one. If it's 1, you're in business and may proceed. <BR /> <BR /> Next, you need to locate the logon session lists. <BR /> <BR /> [caption id="attachment_16765" align="alignnone" width="305"] <IMG src="" /> Figure 12 - dp nt!SepLogonSessions L1[/caption] <BR /> <BR /> Like the previous step, dp means "display pointer," then the name of the symbol, and L1 to just display a single pointer. The 64-bit value on the right is the pointer, and the 64-bit value on the left is the memory address of that pointer. <BR /> <BR /> Now we know where our lists of logon sessions begin. (Lists, plural.) <BR /> <BR /> The SepLogonSessions pointer points to not just a list, but an array of lists. These lists are made up of _SEP_LOGON_SESSION_REFERENCES structures. <BR /> <BR /> Using the dps command (display contiguous pointers) and specifying the beginning of the array that we got from the last step, we can now see where each of the lists in the array begins: <BR /> <BR /> [caption id="attachment_16775" align="alignnone" width="345"] <IMG src="" /> Figure 13 - dps 0xffffb808`3ea02650 – displaying pointers that point to the beginning of each list in the array[/caption] <BR /> <BR /> If there were not very many logon sessions on the system when the memory dump was taken, you might notice that not all the lists are populated: <BR /> <BR /> [caption id="attachment_16785" align="alignnone" width="337"] <IMG src="" /> Figure 14 - Some of the logon session lists are empty because not very many users had logged on in this example[/caption] <BR /> <BR /> The array doesn't fill up contiguously, which is a bummer. 
You'll have to skip over the empty lists. <BR /> <BR /> If we wanted to walk just the first list in the array (we'll talk more about dt and linked lists in just a minute), it would look something like this: <BR /> <BR /> [caption id="attachment_16795" align="alignnone" width="721"] <IMG src="" /> Figure 15 - Walking the first list in the array and using !grep to filter the output[/caption] <BR /> <BR /> Notice that I used the !grep command to filter the output for the sake of brevity and readability. It's part of the Mex debugger extension. I told you it was handy. If you omit the !grep AccountName part, you get the full, unfiltered output. I chose "AccountName" arbitrarily as a keyword because I knew it was unique to each element in the list. !grep will only display lines that contain the keyword(s) that you specify. <BR /> <BR /> Next, if we wanted to walk through the entire array of lists all at once, it might look something like this: <BR /> <BR /> [caption id="attachment_16797" align="alignnone" width="1031"] <IMG src="" /> Figure 16 - Walking through the entire array of lists![/caption] <BR /> <BR /> OK, I realize that I just went bananas there, but I'll explain what just happened step-by-step. <BR /> <BR /> When you are using the Mex debugger extension, you have access to many new text parsing and filtering commands that can truly enhance your debugging experience. When you look at a long command like the one I just showed, read it from right to left. The commands on the right are fed into the command to their left. <BR /> <BR /> So from right to left, let's start with !cut -f 2 dps ffffb808`3ea02650 <BR /> <BR /> We already showed what the dps &lt;address&gt; command did earlier. The !cut -f 2 command filters that command's output so that it only displays the second part of each line separated by whitespace. So essentially, it will display only the pointers themselves, and not their memory addresses.
<BR /> <BR /> Like this: <BR /> <BR /> [caption id="attachment_16805" align="alignnone" width="324"] <IMG src="" /> Figure 17 - Using !cut to select just the second token in each line of output[/caption] <BR /> <BR /> Then that is "piped" line-by-line into the next command to the left, which was: <BR /> <BR /> !fel -x "dt nt!_SEP_LOGON_SESSION_REFERENCES @#Line -l Next" <BR /> <BR /> !fel is an abbreviation for !foreachline. <BR /> <BR /> This command instructs the debugger to execute the given command for each line of output supplied by the previous command, where the @#Line pseudo-variable represents the individual line of output. For each line of output that came from the dps command, we are going to use the dt command with the -l parameter to walk that list. (More on walking lists in just a second.) <BR /> <BR /> Next, we use the !grep command to filter all of that output so that only a single unique line is shown from each list element, as I showed earlier. <BR /> <BR /> Finally, we use the !count -q command to suppress all of the output generated up to that point, and instead only tell us how many lines of output it <EM> would have </EM> generated. This should be the total number of logon sessions on the system. <BR /> <BR /> And 380 was in fact the exact number of logon sessions on the computer when I collected this memory dump. (Refer to Figure 16.) <BR /> <BR /> Alright... now let's take a deep breath and a step back. We just walked an entire array of lists of structures with a single line of commands. But now we need to zoom in and take a closer look at the data structures contained within those lists. <BR /> <BR /> Remember, ffffb808`3ea02650 was the very beginning of the entire array. 
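Conceptually, that one-liner is just nested pointer-chasing: visit each slot in the array, then follow each list's Next links until you hit a null. Here is a toy Python model of the same walk-and-count (all names are invented for illustration; this is not real kernel data):

```python
class Session:
    """Toy stand-in for _SEP_LOGON_SESSION_REFERENCES: just an account
    name and a Next pointer (None plays the role of a null pointer)."""
    def __init__(self, account, next_node=None):
        self.AccountName = account
        self.Next = next_node

def count_sessions(table):
    """Walk every list in the array, following Next links, the way the
    dps + !fel + dt -l Next combination walked SepLogonSessions."""
    total = 0
    for head in table:           # one slot per list in the array
        node = head
        while node is not None:  # empty slots are skipped automatically
            total += 1
            node = node.Next
    return total

# Three sessions spread over an array with an empty slot in the middle:
table = [Session("Alice", Session("Bob")), None, Session("Charlie")]
print(count_sessions(table))     # 3
```

The debugger pipeline does exactly this, except the "array" comes from dps, the "Next" field comes from the structure definition, and !count tallies the filtered lines.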
<BR /> <BR /> Let's examine just the very first _SEP_LOGON_SESSION_REFERENCES entry of the first list, to see what such a structure looks like: <BR /> <BR /> [caption id="attachment_16806" align="alignnone" width="705"] <IMG src="" /> Figure 18 - dt _SEP_LOGON_SESSION_REFERENCES* ffffb808`3ea02650[/caption] <BR /> <BR /> That's a logon session! <BR /> <BR /> Let's go over a few of the basic fields in this structure. (Skipping some of the more advanced ones.) <BR /> <UL> <BR /> <LI> <STRONG> Next </STRONG> : This is a pointer to the next element in the list. You might notice that there's a "Next," but there's no "Previous." So, you can only walk the list in one direction. This is a singly-linked list. </LI> <BR /> <LI> <STRONG> LogonId </STRONG> : Every logon gets a unique one. For example, "0x3e7" is always the "System" logon. </LI> <BR /> <LI> <STRONG> ReferenceCount </STRONG> : This is how many outstanding token references this logon session has. This is the number that must reach zero before the logon session can be destroyed. In our example, it's 4. </LI> <BR /> <LI> <STRONG> AccountName </STRONG> : The user who does or used to occupy this session. </LI> <BR /> <LI> <STRONG> AuthorityName </STRONG> : Will be the user's Active Directory domain, typically. Or the computer name if it's a local account. </LI> <BR /> <LI> <STRONG> TokenList </STRONG> : This is a doubly or circularly-linked list of the tokens that are associated with this logon session. The number of tokens in this list should match the ReferenceCount. </LI> <BR /> </UL> <BR /> The following is an illustration of a doubly-linked list: <BR /> <BR /> [caption id="attachment_16815" align="alignnone" width="567"] <IMG src="" /> Figure 19 - Doubly or circularly-linked list[/caption] <BR /> <BR /> "Flink" stands for Forward Link, and "Blink" stands for Back Link. 
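<P>If it helps to see the singly-linked shape in ordinary code, here is a minimal Python model of it (purely illustrative, not real Windows code; the field names are just borrowed from the kernel structure described above):</P>

```python
class SessionNode:
    """Toy model of _SEP_LOGON_SESSION_REFERENCES: linked only via Next."""
    def __init__(self, account, nxt=None):
        self.AccountName = account
        self.Next = nxt  # no "Previous" field, so the walk is one-way


def walk_singly(head):
    """Follow Next until it runs out -- what 'dt ... -l Next' does for us."""
    names = []
    node = head
    while node is not None:
        names.append(node.AccountName)
        node = node.Next
    return names


# Three sessions chained together, then walked front to back:
sessions = SessionNode("Administrator",
                       SessionNode("NetworkService",
                                   SessionNode("System")))
print(walk_singly(sessions))  # ['Administrator', 'NetworkService', 'System']
```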
<BR /> <BR /> So now that we understand that the TokenList member of the _SEP_LOGON_SESSION_REFERENCES structure is a linked list, here is how you walk that list: <BR /> <BR /> [caption id="attachment_16825" align="alignnone" width="862"] <IMG src="" /> Figure 20 - dt nt!_LIST_ENTRY* 0xffffb808`500bdba0+0x0b0 -l Flink[/caption] <BR /> <BR /> The dt command stands for "display type," followed by the symbol name of the type that you want to cast the following address to. The reason why we specified the address 0xffffb808`500bdba0 is because that is the address of the _SEP_LOGON_SESSION_REFERENCES object that we found earlier. The reason why we added +0x0b0 after the memory address is because that is the offset from the beginning of the structure at which the TokenList field begins. The -l parameter specifies that we're trying to walk a list, and finally you must specify a field name (Flink in this case) that tells the debugger which field to use to navigate to the next node in the list. <BR /> <BR /> We walked a list of tokens and what did we get? A list head and 4 data nodes, 5 entries total, which lines up with the ReferenceCount of 4 tokens that we saw earlier. One of the nodes won't have any data – that's the list head. <BR /> <BR /> Now, for each entry in the linked list, we can examine its data. We know the payloads that these list nodes carry are tokens, so we can use dt to cast them as such: <BR /> <BR /> [caption id="attachment_16835" align="alignnone" width="773"] <IMG src="" /> Figure 21 - dt _TOKEN*0xffffb808`4f565f40+8+8 - Examining the first token in the list[/caption] <BR /> <BR /> The reason for the +8+8 on the end is because that's the offset of the payload. It's just after the Flink and Blink as shown in Figure 19. You want to skip over them. <BR /> <BR /> We can see that this token is associated to SessionId 0x136/0n310. (Remember I had 380 leaked sessions in this dump.) 
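<P>The "list head plus data nodes" accounting is easy to get wrong, so here is a small Python sketch of a circular Flink/Blink list like TokenList (again illustrative only, not the actual kernel layout). The head participates in the cycle but carries no payload, which is why a ReferenceCount of 4 shows up as 5 entries in the walk:</P>

```python
class ListEntry:
    """Toy model of a circular doubly-linked list entry (Flink/Blink)."""
    def __init__(self, payload=None):
        self.Flink = self       # an empty list points at itself
        self.Blink = self
        self.payload = payload  # the list head itself carries no payload


def insert_tail(head, entry):
    """Link a new entry just before the head, i.e. at the tail of the cycle."""
    entry.Blink = head.Blink
    entry.Flink = head
    head.Blink.Flink = entry
    head.Blink = entry


def count_data_nodes(head):
    """Walk Flink until we arrive back at the head, counting data nodes."""
    count, node = 0, head.Flink
    while node is not head:
        count += 1
        node = node.Flink
    return count


tokens = ListEntry()             # the TokenList head inside the session
for i in range(4):               # four tokens -> ReferenceCount of 4
    insert_tail(tokens, ListEntry(payload=f"token{i}"))

print(count_data_nodes(tokens))  # 4 data nodes + 1 list head = 5 entries
```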
If you examine the UserAndGroups member by clicking on its DML (click the link,) you can then use !sid to see the SID of the user this token represents: <BR /> <BR /> [caption id="attachment_16845" align="alignnone" width="962"] <IMG src="" /> Figure 22 - Using !sid to see the security identifier in the token[/caption] <BR /> <BR /> The token also has a DiagnosticInfo structure, which is super-interesting, and is the coolest thing that we unlocked when we set the SeTokenLeakDiag registry setting on the machine earlier. Let's look at it: <BR /> <BR /> [caption id="attachment_16846" align="alignnone" width="1096"] <IMG src="" /> Figure 23 - Examining the DiagnosticInfo structure of the first token[/caption] <BR /> <BR /> We now have the process ID and the thread ID that was responsible for creating this token! We could examine the ImageFileName, or we could use the ProcessCid to see who it is: <BR /> <BR /> [caption id="attachment_16855" align="alignnone" width="635"] <IMG src="" /> Figure 24 - Using !mex.tasklist to find a process by its PID[/caption] <BR /> <BR /> Oh... <EM> Whoops </EM> . Looks like this particular token leak is lsass's fault. You're just going to have to let the <EM> *ahem* </EM> application vendor take care of that one. <BR /> <BR /> Let's move on to a different token leak. We're moving on to a different memory dump file as well, so the memory addresses are going to be different from here on out. <BR /> <BR /> I created a special token-leaking application specifically for this article. It looks like this: <BR /> <BR /> [caption id="attachment_16856" align="alignnone" width="618"] <IMG src="" /> Figure 25 - RyansTokenGrabber.exe[/caption] <BR /> <BR /> It monitors the system for users logging on, and as soon as they do, it duplicates their token via the <A href="#" target="_blank"> DuplicateToken </A> API call. 
I purposely never release those tokens, so if I collect a memory dump of the machine while this is running, then evidence of the leak should be visible in the dump, using the same steps as before. <BR /> <BR /> Using the same debugging techniques I just demonstrated, I verified that I have leaked logon sessions in this memory dump as well, and each leaked session has an access token reference that looks like this: <BR /> <BR /> [caption id="attachment_16865" align="alignnone" width="628"] <IMG src="" /> Figure 26 - A _TOKEN structure shown with its attached DiagnosticInfo[/caption] <BR /> <BR /> And then by looking at the token's DiagnosticInfo, we find that the guilty party responsible for leaking this token is indeed RyansTokenGrabber.exe: <BR /> <BR /> [caption id="attachment_16875" align="alignnone" width="745"] <IMG src="" /> Figure 27 - The process responsible for leaking this token[/caption] <BR /> <BR /> By this point you know who to blame, and now you can go find the author of RyansTokenGrabber.exe, and show them the stone-cold evidence that you've collected about how their application is leaking access tokens, leading to logon session leaks, causing you to have to reboot your server every few days, which is a ridiculous and inconvenient thing to have to do, and you shouldn't stand for it! <BR /> <BR /> We're almost done, but I have one last trick to show you. <BR /> <BR /> If you examine the StackTrace member of the token's DiagnosticInfo, you'll see something like this: <BR /> <BR /> [caption id="attachment_16885" align="alignnone" width="1009"] <IMG src="" /> Figure 28 - DiagnosticInfo.CreateTrace[/caption] <BR /> <BR /> This is a stack trace. It's a snapshot of all the function calls that led up to this token's creation. These stack traces grow upwards, so the function at the top of the stack was called last. But the function addresses are not resolving. We must do a little more work to figure out the names of the functions.
<BR /> <BR /> First, clean up the output of the stack trace: <BR /> <BR /> [caption id="attachment_16895" align="alignnone" width="738"] <IMG src="" /> Figure 29 - Using !grep and !cut to clean up the output[/caption] <BR /> <BR /> Now, using all the snazzy new Mex magic you've learned, see if you can unassemble (that's the u command) each address to see if it resolves to a function name: <BR /> <BR /> [caption id="attachment_16896" align="alignnone" width="917"] <IMG src="" /> Figure 30 - Unassemble instructions at each address in the stack trace[/caption] <BR /> <BR /> The output continues beyond what I've shown above, but you get the idea. <BR /> <BR /> The function on top of the trace will almost always be SepDuplicateToken, but could also be SepCreateToken or SepFilterToken, and whether one creation method was used versus another could be a big hint as to where in the program's code to start searching for the token leak. You will find that the usefulness of these stacks will vary wildly from one scenario to the next, as things like inlined functions, lack of symbols, unloaded modules, and managed code all influence the integrity of the stack. However, you (or the developer of the application you're using) can use this information to figure out where the token is being created in this program, and fix the leak. <BR /> <BR /> Alright, that's it. If you're still reading this, then... thank you for hanging in there. I know this wasn't exactly a light read. <BR /> <BR /> And lastly, allow me to reiterate that this is not just a contrived, unrealistic scenario; there's a lot of software out there on the market that does this kind of thing. And if you happen to write such software, then I really hope you read this blog post. It may help you improve the quality of your software in the future. Windows needs application developers to be "good citizens" and avoid writing software with the ability to destabilize the operating system.
Hopefully this blog post helps someone out there do just that. <BR /> <BR /> Until next time, <BR /> Ryan "Too Many Tokens" Ries </BODY></HTML> Fri, 05 Apr 2019 03:25:15 GMT Ryan Ries 2019-04-05T03:25:15Z Troubleshooting failed password changes after installing MS16-101 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 13, 2016 </STRONG> <BR /> Hi! <BR /> <BR /> Linda Taylor here, Senior Escalation Engineer in the Directory Services space. <BR /> <BR /> I have spent the last month working with customers worldwide who experienced password change failures after installing the updates under the MS16-101 security bulletin KBs (listed below), as well as working with the product group in getting those addressed and documented in the public KB articles under the <B> known issues </B> section. It has been busy! <BR /> <BR /> In this post I will aim to provide you with a quick “cheat sheet” of known issues and needed actions as well as ideas and troubleshooting techniques to get there. <BR /> Let’s start by understanding the changes. <BR /> The following 6 articles describe the changes in MS16-101 as well as a list of <B> Known issues. </B> If you have not yet applied MS16-101, I would strongly recommend reading these and understanding how they may affect you.
<BR /> <BR /> <A href="#" target="_blank"> 3176492 </A> Cumulative update for Windows 10: August 9, 2016 <BR /> <A href="#" target="_blank"> 3176493 </A> Cumulative update for Windows 10 Version 1511: August 9, 2016 <BR /> <A href="#" target="_blank"> 3176495 </A> Cumulative update for Windows 10 Version 1607: August 9, 2016 <BR /> <A href="#" target="_blank"> 3178465 </A> MS16-101: Security update for Windows authentication methods: August 9, 2016 <BR /> <A href="#" target="_blank"> 3167679 </A> MS16-101: Description of the security update for Windows authentication methods: August 9, 2016 <BR /> <A href="#" target="_blank"> 3177108 </A> MS16-101: Description of the security update for Windows authentication methods: August 9, 2016 <BR /> <BR /> <B> <STRONG> <STRONG> <STRONG> <STRONG> <STRONG> The good news </STRONG> </STRONG> </STRONG> </STRONG> </STRONG> is </B> that this month’s updates address some of the known issues with MS16-101. <BR /> <BR /> <B> The bad news </B> <STRONG> is </STRONG> that not all the issues are caused by some code defect in MS16-101, and in some cases the right solution is to make your environment more secure by ensuring that the password change can happen over Kerberos and does not need to fall back to NTLM. That may include opening TCP ports used by Kerberos, fixing other Kerberos problems like missing SPNs, or changing your application code to pass in a valid domain name. <BR /> <BR /> Let’s start with the basics… <BR /> Symptoms: <BR /> <BR /> <BR /> After applying the MS16-101 fixes listed above, password changes may fail with the error code <BR /> <BR /> “The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you.” <BR /> Or <BR /> “The system cannot contact a domain controller to service the authentication request. 
Please try again later.” <BR /> <BR /> This text maps to the error codes below: <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Hexadecimal </STRONG> </TD> <TD> <STRONG> Decimal </STRONG> </TD> <TD> <STRONG> Symbolic </STRONG> </TD> <TD> <B> <STRONG> <STRONG> <STRONG> Friendly </STRONG> </STRONG> </STRONG> </B> </TD> </TR> <TR> <TD> 0xc0000388 </TD> <TD> -1073740920 </TD> <TD> STATUS_DOWNGRADE_DETECTED </TD> <TD> The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you. </TD> </TR> <TR> <TD> 0x800704f1 </TD> <TD> 1265 </TD> <TD> ERROR_DOWNGRADE_DETECTED </TD> <TD> The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you. </TD> </TR> </TBODY></TABLE> <BR /> <B> <STRONG> <STRONG> Question: What does MS16-101 do and why would password changes fail after installing it? </STRONG> </STRONG> </B> <BR /> <BR /> <B> <STRONG> Answer: </STRONG> </B> As documented in the listed KB articles, the security updates that are provided in MS16-101 disable the ability of the <A href="#" target="_blank"> Microsoft Negotiate SSP </A> to fall back to NTLM for password change operations in the case where Kerberos fails with the STATUS_NO_LOGON_SERVERS (0xc000005e) error code. <BR /> In this situation, the password change will now fail (post MS16-101) with the above-mentioned error codes (ERROR_DOWNGRADE_DETECTED / STATUS_DOWNGRADE_DETECTED). <BR /> <STRONG> <STRONG> Important: </STRONG> </STRONG> Password RESET is not affected by MS16-101 at all in any scenario. Only password change using the Negotiate package is affected. <BR /> <BR /> So, now that you understand the change, let’s look at the known issues and learn how to best identify and resolve those. 
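<P>As a quick sanity check of those numbers (this is just the standard NTSTATUS/HRESULT encoding, nothing specific to MS16-101): the NTSTATUS 0xC0000388 printed as a signed 32-bit integer is -1073740920, and the Win32 error 1265 (0x4F1) wrapped as an HRESULT, the way HRESULT_FROM_WIN32 does it, becomes 0x800704F1.</P>

```python
# NTSTATUS 0xC0000388 (STATUS_DOWNGRADE_DETECTED) as a signed 32-bit value:
status = 0xC0000388
assert status - 2**32 == -1073740920

# Win32 error 1265 (ERROR_DOWNGRADE_DETECTED) wrapped as an HRESULT:
# severity bit set, FACILITY_WIN32 (7) in bits 16-26, code in the low word.
hresult = 0x80070000 | 1265
assert hresult == 0x800704F1

print(hex(status), hex(hresult))
```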
<BR /> Summary and Cheat Sheet <BR /> <BR /> <BR /> To make it easier to follow, I have matched the ordering of known issues in this post with the public KB articles above. <BR /> <BR /> First, when troubleshooting a failed password change post MS16-101 you will need to understand <I> HOW </I> and <I> WHERE </I> the password change is happening and whether it is for a domain account or a local account. Here is a cheat sheet. <BR /> Summary of scenarios and a quick reference table of actions needed. <BR /> <TABLE> <TBODY><TR> <TD> <TABLE> <TBODY><TR> <TD> <STRONG> Scenario / Known issue # </STRONG> </TD> <TD> <STRONG> Description </STRONG> </TD> <TD> <STRONG> Action Needed </STRONG> </TD> </TR> <TR> <TD> 1. </TD> <TD> Domain password change fails via CTRL+ALT+DEL and shows an error like this: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Text: “System detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.” </TD> <TD> Troubleshoot using this guide and fix Kerberos. </TD> </TR> <TR> <TD> 2. </TD> <TD> Domain password change fails via application code with an INCORRECT/UNEXPECTED error code when a password which does not meet password complexity is entered. <BR /> <BR /> For example, before installing MS16-101, such a password change may have returned a status like STATUS_PASSWORD_RESTRICTION, and it now returns STATUS_DOWNGRADE_DETECTED (after installing MS16-101), causing your application to behave in an unexpected way or even crash. <BR /> <BR /> Note: In these cases, the password change works OK when a correct new password is entered that complies with the password policy. 
</TD> <TD> Install October fixes in the table below. </TD> </TR> <TR> <TD> 3. </TD> <TD> Local user account password change fails via CTRL+ALT+DEL or application code. </TD> <TD> Install October fixes in the table below. </TD> </TR> <TR> <TD> 4. </TD> <TD> Passwords for disabled and locked-out user accounts cannot be changed using the Negotiate method. </TD> <TD> None. By design. </TD> </TR> <TR> <TD> 5. </TD> <TD> Domain password change fails via application code when a good password is entered. <BR /> <BR /> This is the case where, if you pass a servername to NetUserChangePassword, the password change will fail post MS16-101. This is because it would have previously worked by relying on NTLM. NTLM is insecure and Kerberos is always preferred. Therefore, passing a domain name here is the way forward. <BR /> <BR /> One thing to note for this one is that most of the ADSI and C#/.NET ChangePassword APIs end up calling NetUserChangePassword under the hood. Therefore, passing invalid domain names to these APIs will also fail. I have provided a detailed walkthrough example in this post with log snippets. </TD> <TD> Troubleshoot using this guide and fix code to use Kerberos. </TD> </TR> <TR> <TD> 6. </TD> <TD> After you install the MS16-101 update, you may encounter 0xC0000022 NTLM authentication errors. </TD> <TD> To resolve this issue, see <A href="#" target="_blank"> KB3195799 </A> NTLM authentication fails with 0xC0000022 error for Windows Server 2012, Windows 8.1, and Windows Server 2012 R2 after update is applied. </TD> </TR> <TR> <TD> 7. </TD> <TD> After you install the security updates that are described in MS16-101, remote, programmatic changes of a local user account password, and password changes across an untrusted forest, fail with the STATUS_DOWNGRADE_DETECTED error as documented in this post. <BR /> <BR /> This happens because the operation relies on NTLM fall-back, since there is no Kerberos without a trust. NTLM fall-back is forbidden by MS16-101. 
</TD> <TD> For this scenario you will need to install the October fixes in the table below and set the registry key <B> NegoAllowNtlmPwdChangeFallback </B> documented in the KBs below, which allows the NTLM fallback to happen again and unblocks this scenario. <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <A href="#" target="_blank"> </A> <BR /> <A href="#" target="_blank"> </A> <BR /> <A href="#" target="_blank"> </A> <BR /> <A href="#" target="_blank"> </A> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> <STRONG> Note </STRONG> : you may also consider using this registry key in an emergency for Known Issue #5 when it takes time to update the application code. However, please read the above articles carefully and only consider this as a short-term solution for scenario 5. </TD> </TR> </TBODY></TABLE> <BR /> <I> </I> </TD> </TR> <TR> <TD> </TD> </TR> </TBODY></TABLE> <BR /> Table of fixes for the known issues above, released 2016.10.11, taken from the <A href="#" target="_blank"> MS16-101 Security Bulletin </A> : <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <B> OS </B> </TD> <TD> <B> Fix needed </B> </TD> </TR> <TR> <TD> <B> <STRONG> Vista / W2K8 </STRONG> </B> </TD> <TD> Re-install <A href="#" target="_blank"> 3167679 </A> , re-released 2016.10.11 </TD> </TR> <TR> <TD> <B> Win7 / W2K8 R2 </B> </TD> <TD> Install <A href="#" target="_blank"> 3192391 </A> (security only) <BR /> or <BR /> Install <A href="#" target="_blank"> 3185330 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> WS12 </B> </TD> <TD> <A href="#" target="_blank"> 3192393 </A> (security only) <BR /> or <BR /> <A href="#" target="_blank"> 3185332 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> Win8.1 / WS12 R2 </B> </TD> <TD> <A href="#" target="_blank"> 3192392 </A> (security only) <BR /> OR <BR /> <A href="#" target="_blank"> 3185331 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> Windows 10 </B> </TD> <TD> <B> For 
1511: </B> <A href="#" target="_blank"> 3192441 </A> Cumulative update for Windows 10 Version 1511: October 11, 2016 <BR /> <B> For 1607: </B> <A href="#" target="_blank"> 3194798 </A> Cumulative update for Windows 10 Version 1607 and Windows Server 2016: October 11, 2016 </TD> </TR> </TBODY></TABLE> <BR /> Troubleshooting <BR /> As I mentioned, this post is intended to support the documentation of the <B> known issues </B> in the MS16-101 KB articles and provide help and guidance for troubleshooting. It should help you identify which known issue you are experiencing as well as provide resolution suggestions for each case. <BR /> <BR /> I have also included a troubleshooting walkthrough of some of the more complex example cases. We will start with the problem definition, and then look at the available logs and tools to identify a suitable resolution. The idea is to teach “how to fish” because there can be many different scenarios, and hopefully you can apply these techniques and use the log files documented here to help resolve the issues when needed. <BR /> <BR /> Once you know the scenario that you are using for the password change, the next step is usually to collect some data on the server or client where the password change is occurring. For example, if you have a web server running a password change application and doing password changes on behalf of users, you will need to collect the logs there. If in doubt, collect the logs from all involved machines and then look for the right one doing the password change using the snippets in the examples. Here are the helpful logs. <BR /> <BR /> <B> <STRONG> DATA COLLECTION </STRONG> </B> <BR /> <BR /> The same logs will help in all the scenarios. <BR /> <BR /> <STRONG> <STRONG> LOGS </STRONG> </STRONG> <BR /> <BR /> <STRONG> 1 <STRONG> . 
SPNEGO debug log / LSASS.log </STRONG> </STRONG> <B> </B> <BR /> <BR /> To enable this log, run the following commands from an elevated admin CMD prompt to set the below registry keys: <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v SPMInfoLevel /t REG_DWORD /d 0xC03E3F /f <BR /> reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v LogToFile /t REG_DWORD /d 1 /f <BR /> reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v NegEventMask /t REG_DWORD /d 0xF /f </STRONG> </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <UL> <BR /> <LI> This will log Negotiate debug output to %windir%\system32 <B> \lsass.log. </B> </LI> <BR /> <LI> There is no need for a reboot. The log is effective immediately. </LI> <BR /> <LI> Lsass.log is a text file that is easy to read with a text editor such as WordPad. </LI> <BR /> </UL> <BR /> <STRONG> <STRONG> 2. Netlogon.log: </STRONG> </STRONG> <BR /> <BR /> This log has been around for many years and is useful for troubleshooting DC LOCATOR traffic. It can be used together with a network trace to understand why STATUS_NO_LOGON_SERVERS is being returned for the Kerberos password change attempt. <BR /> <BR /> · To enable Netlogon debug logging, run the following command from an elevated CMD prompt: <BR /> <BR /> nltest /dbflag:0x26FFFFFF <BR /> <BR /> · The resulting log is found in %windir%\debug\ <B> netlogon.log </B> &amp; <B> netlogon.bak </B> <BR /> <BR /> · There is no need for a reboot. The log is effective immediately. See also <A href="#" target="_blank"> 109626 </A> Enabling debug logging for the Net Logon service <BR /> <BR /> · The <B> Netlogon.log </B> (and Netlogon.bak) is a text file. <BR /> <BR /> Open the log with any text editor (I like good old Notepad.exe). <BR /> <BR /> 3. Collect a <B> <STRONG> Network trace </STRONG> </B> during the password change issue using the tool of your choice. 
<BR /> Scenarios, Explanations and Walkthroughs: <BR /> When reading this, you should keep in mind that you may be seeing more than one scenario. The best thing to do is to start with one, fix that, and see if there are any other problems left. <BR /> 1. Domain password change fails via CTRL+ALT+DEL <BR /> This is most likely a Kerberos DC locator failure of some kind where the password changes were relying on NTLM before installing MS16-101 and are now failing. This is the simplest and easiest case to resolve using basic Kerberos troubleshooting methods. <BR /> <BR /> <B> <STRONG> <STRONG> Solution: </STRONG> </STRONG> </B> Fix Kerberos. <B> </B> <BR /> <BR /> <B> Some tips from cases which we saw: </B> <BR /> <BR /> 1. Use the network trace to identify if the necessary communication ports are open. This was quite a common issue, so start by checking this. <BR /> <BR /> In order for Kerberos password changes to work, communication on TCP port 464 needs to be open between the client doing the <BR /> password change and the domain controller. <BR /> <BR /> Note on RODC: Read-only domain controllers (RODCs) can service password changes if the user is allowed by the RODC's password replication policy. Users who are not allowed by the RODC password policy require network connectivity to a read/write domain controller (RWDC) in the user account domain to be able to change the password. <BR /> <BR /> <STRONG> To check whether TCP port 464 is open, follow these steps (also documented in KB </STRONG> <A href="#" target="_blank"> <STRONG> 3167679 </STRONG> </A> <STRONG> ): </STRONG> <BR /> <BR /> a. Create an equivalent display filter for your network monitor parser. For example: <BR /> <BR /> <STRONG> ipv4.address== &lt;ip address of client&gt; &amp;&amp; tcp.port==464 </STRONG> <BR /> <BR /> b. In the results, look for the "TCP:[SynReTransmit" frame. <BR /> <BR /> If you find these, then investigate firewall and open ports. 
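<P>If you want a quick scripted check before (or alongside) the network trace, a plain TCP probe of port 464 will tell you whether the kpasswd port is reachable at all. Here is a generic Python sketch; the hostname in the comment is a placeholder for your own DC:</P>

```python
import socket


def tcp_port_open(host, port=464, timeout=3):
    """Return True if a TCP connection to host:port succeeds.

    Port 464 (kpasswd) must be reachable from the client for Kerberos
    password changes to work.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example with a placeholder DC name:
# print(tcp_port_open("dc01.contoso.com"))
```

<P>A False result means the port is not reachable from that client and warrants the firewall investigation described above.</P>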
It is often useful to take a simultaneous trace from the client and the domain controller and check if the packets are arriving at the other end. <BR /> <BR /> 2. Make sure that the target Kerberos names are valid. <BR /> <UL> <BR /> <LI> IP addresses are not valid Kerberos names </LI> <BR /> <LI> Kerberos supports short names and fully qualified domain names. Like CONTOSO or </LI> <BR /> </UL> <BR /> <STRONG> 3. Make sure that service principal names (SPNs) are registered correctly. </STRONG> <BR /> <BR /> For more information on troubleshooting Kerberos see <A href="#" target="_blank"> or </A> <A href="#" target="_blank"> </A> <BR /> 2. Domain password change fails via application code with an INCORRECT/UNEXPECTED error code when a password which does not meet password complexity is entered. <BR /> <BR /> <BR /> For example, before installing MS16-101, such a password change may have returned a status like STATUS_PASSWORD_RESTRICTION. After installing MS16-101 it returns STATUS_DOWNGRADE_DETECTED, causing your application to behave in an unexpected way or even crash. <BR /> <BR /> Note: In this scenario, password change succeeds when a correct new password is entered that complies with the password policy. <BR /> <BR /> <B> </B> <BR /> <BR /> <B> <STRONG> Cause: </STRONG> </B> <BR /> <BR /> This issue is caused by a code defect in ADSI whereby the status returned from Kerberos was not returned to the user by ADSI correctly. <BR /> Here is a more detailed explanation of this one for the geek in you: <BR /> <BR /> <B> Before MS16-101 behavior: </B> <BR /> <BR /> 1. An application calls the ChangePassword method using the ADSI LDAP provider. <BR /> Setting and changing passwords with the ADSI LDAP Provider is documented <A href="#" target="_blank"> here. </A> <BR /> Under the hood this calls Negotiate/Kerberos to change the password using a valid realm name. <BR /> Kerberos returns STATUS_PASSWORD_RESTRICTION or another failure code. <BR /> <BR /> 2. 
A 2nd ChangePassword call is made via the NetUserChangePassword API with the &lt;dcname&gt; as the realm name, which uses <BR /> Negotiate and will retry Kerberos. Kerberos fails with STATUS_NO_LOGON_SERVERS because a DC name is not a valid realm name. <BR /> <BR /> 3. Negotiate then retries over NTLM, which succeeds or returns the same previous failure status. <BR /> <BR /> The password change fails if a bad password was entered, and the NTLM error code is returned back to the application. If a valid password was entered, everything works because the 1st ChangePassword call passes in a good name, and if Kerberos works, the password change operation succeeds and you never enter into step 3. <BR /> <BR /> <B> <STRONG> Post MS16-101 behavior / why it fails with MS16-101 installed: </STRONG> </B> <BR /> <BR /> 1. An application calls the ChangePassword method using the ADSI LDAP provider. This calls Negotiate for the password change with <BR /> a valid realm name. <BR /> Kerberos returns STATUS_PASSWORD_RESTRICTION or another failure code. <BR /> <BR /> 2. A 2nd ChangePassword call is made via NetUserChangePassword with a &lt;dcname&gt; as the realm name, which fails over Kerberos with <BR /> STATUS_NO_LOGON_SERVERS, which triggers NTLM fallback. <BR /> <BR /> 3. Because NTLM fallback is blocked by MS16-101, the error STATUS_DOWNGRADE_DETECTED is returned to the calling app. <BR /> <BR /> <B> <STRONG> Solution: </STRONG> </B> Easy. Install the October update, which will fix this issue. The fix lies in <B> <STRONG> adsmsext.dll </STRONG> </B> included in the October updates. 
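<P>The before/after flows above boil down to one small decision: what Negotiate does when Kerberos fails with STATUS_NO_LOGON_SERVERS. Here is a compact, purely illustrative Python model of that decision logic (not real code from the Negotiate package):</P>

```python
def negotiate_change_password(realm_is_valid, ms16_101_installed):
    """Toy model of the Negotiate fallback decision described above."""
    # Kerberos can only locate a DC when given a valid realm name;
    # a DC hostname passed as the "realm" makes DC location fail.
    if realm_is_valid:
        return "STATUS_SUCCESS (Kerberos)"
    # Kerberos failed with STATUS_NO_LOGON_SERVERS...
    if ms16_101_installed:
        # ...and NTLM fallback for password changes is now forbidden:
        return "STATUS_DOWNGRADE_DETECTED"
    # Pre-patch behavior: quietly fall back to NTLM.
    return "STATUS_SUCCESS (NTLM fallback)"


# Passing a DC name as the realm "worked" before the patch, via NTLM:
print(negotiate_change_password(realm_is_valid=False, ms16_101_installed=False))
# The same call after MS16-101 surfaces the downgrade error instead:
print(negotiate_change_password(realm_is_valid=False, ms16_101_installed=True))
```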
<BR /> <BR /> Again, here are the updates you need to install, taken from the <A href="#" target="_blank"> MS16-101 Security Bulletin </A> : <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> OS </STRONG> </TD> <TD> <B> <STRONG> Fix needed </STRONG> </B> </TD> </TR> <TR> <TD> <B> Vista / W2K8 </B> </TD> <TD> Re-install <A href="#" target="_blank"> 3167679 </A> , re-released 2016.10.11 </TD> </TR> <TR> <TD> <B> Win7 / W2K8 R2 </B> </TD> <TD> Install <A href="#" target="_blank"> 3192391 </A> (security only) <BR /> or <BR /> Install <A href="#" target="_blank"> 3185330 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> WS12 </B> </TD> <TD> <A href="#" target="_blank"> 3192393 </A> (security only) <BR /> or <BR /> <A href="#" target="_blank"> 3185332 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> Win8.1 / WS12 R2 </B> </TD> <TD> <A href="#" target="_blank"> 3192392 </A> (security only) <BR /> OR <BR /> <A href="#" target="_blank"> 3185331 </A> (monthly rollup that includes security fixes) </TD> </TR> <TR> <TD> <B> Windows 10 </B> </TD> <TD> <B> For 1511: </B> <A href="#" target="_blank"> 3192441 </A> Cumulative update for Windows 10 Version 1511: October 11, 2016 <BR /> <B> For 1607: </B> <A href="#" target="_blank"> 3194798 </A> Cumulative update for Windows 10 Version 1607 and Windows Server 2016: October 11, 2016 </TD> </TR> </TBODY></TABLE> <BR /> 3. Local user account password change fails via CTRL+ALT+DEL or application code. <BR /> <BR /> <BR /> Installing the October updates above should also resolve this. <BR /> <BR /> MS16-101 had a defect where Negotiate did not correctly determine that the password change was local and would try to find a DC using the local machine as the domain name. <BR /> <BR /> This failed, and NTLM fallback was no longer allowed post MS16-101. Therefore, the password changes failed with STATUS_DOWNGRADE_DETECTED. 
<BR /> <BR /> <B> <STRONG> Example: </STRONG> </B> <BR /> <BR /> <B> </B> <BR /> <BR /> One such scenario I saw, where password changes of local user accounts via CTRL+ALT+DEL failed with the message "The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.", was when you have the following group policy set and you try to change the password of a local account: <BR /> <TABLE> <TBODY><TR> <TD> Policy </TD> <TD> Computer Configuration \ Administrative Templates \ System \ Logon\“Assign a default domain for logon” </TD> </TR> <TR> <TD> Path </TD> <TD> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DefaultLogonDomain </TD> </TR> <TR> <TD> Setting </TD> <TD> DefaultLogonDomain </TD> </TR> <TR> <TD> Data Type </TD> <TD> REG_SZ </TD> </TR> <TR> <TD> Value </TD> <TD> "."&nbsp;&nbsp;&nbsp; (less quotes). The period or "dot" designates the local machine name </TD> </TR> <TR> <TD> Notes </TD> <TD> </TD> </TR> </TBODY></TABLE> <BR /> <B> <STRONG> Cause: </STRONG> </B> In this case, post MS16-101, Negotiate incorrectly determined that the account is not local and tried to discover a DC using \\&lt;machinename&gt; as the domain, and failed. This caused the password change to fail with the STATUS_DOWNGRADE_DETECTED error. <BR /> <BR /> <B> </B> <BR /> <BR /> <B> <STRONG> Solution: </STRONG> </B> Install the October fixes listed in the table at the top of this post. <B> </B> <BR /> 4. Passwords for disabled and locked-out user accounts cannot be changed using the Negotiate method. <BR /> <BR /> <BR /> MS16-101 purposely disabled changing the passwords of locked-out or disabled user accounts via Negotiate. <BR /> <BR /> Important: Password Reset is not affected by MS16-101 at all in any scenario. Only password change. Therefore, any application which is doing a password Reset will be unaffected by MS16-101. 
<BR /> <BR /> Another important thing to note is that MS16-101 only affects applications using Negotiate. Therefore, it is possible to change locked-out and disabled account passwords using other methods, such as LDAP. <BR /> <BR /> For example, the PowerShell cmdlet <B> <I> Set-ADAccountPassword </I> </B> will continue to work for locked-out and disabled account password changes, as it does not use Negotiate. <BR /> 5. Troubleshooting domain password change failure via application code when a good password is entered. <BR /> <BR /> <BR /> This is one of the most difficult scenarios to identify and troubleshoot, so I have provided a more detailed example here, including sample code, the cause and the solution. <BR /> <BR /> <B> In summary, </B> the solution in these cases is almost always to correct the application code, which may be passing in an invalid domain name such that Kerberos fails with STATUS_NO_LOGON_SERVERS. <BR /> <BR /> <B> Scenario: </B> <BR /> <BR /> An application is using the <B> System.DirectoryServices.AccountManagement </B> namespace to change a user's password. <BR /> <BR /> After installing MS16-101, password changes fail with STATUS_DOWNGRADE_DETECTED.
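As a sketch of the LDAP-based alternative mentioned above (account name and password values are illustrative; requires the ActiveDirectory RSAT module), a password change via Set-ADAccountPassword keeps working for locked-out or disabled accounts post-MS16-101 because it does not go through Negotiate:

```powershell
# Password change over LDAP via the ActiveDirectory module.
# MS16-101 restrictions do not apply here, since Negotiate is not used.
Import-Module ActiveDirectory
$old = ConvertTo-SecureString "oldPassword!123" -AsPlainText -Force
$new = ConvertTo-SecureString "newPassword!123" -AsPlainText -Force
Set-ADAccountPassword -Identity TestUser -OldPassword $old -NewPassword $new
```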
Example failing .NET code snippet (via PowerShell) which worked before MS16-101: <BR /> <BR /> &lt;snip&gt; <BR /> <BR /> Add-Type -AssemblyName System.DirectoryServices.AccountManagement <BR /> $ct = [System.DirectoryServices.AccountManagement.ContextType]::Domain <BR /> $ctoptions = [System.DirectoryServices.AccountManagement.ContextOptions]::SimpleBind -bor [System.DirectoryServices.AccountManagement.ContextOptions]::ServerBind <BR /> $pc = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ct, "", "OU=Accounts,DC=Contoso,DC=Com", $ctoptions) <BR /> $idType = [System.DirectoryServices.AccountManagement.IdentityType]::SamAccountName <BR /> $up = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($pc,$idType, "TestUser") <BR /> $up.ChangePassword("oldPassword!123", "newPassword!123") <BR /> <BR /> &lt;snip&gt; <BR /> <BR /> <STRONG> <STRONG> Data Analysis </STRONG> <BR /> </STRONG> <BR /> <BR /> There are two possibilities here: <BR /> <STRONG> (a) </STRONG> The application code is passing an incorrect domain name parameter, causing the Kerberos password change to fail to locate a DC. <BR /> <STRONG> (b) </STRONG> The application code is good, and the Kerberos password change fails for another reason, such as a blocked port, a DNS issue or a missing SPN. <BR /> <BR /> Let’s start with <STRONG> (a) </STRONG> : the application code is passing an incorrect domain name/parameter, causing the Kerberos password change to fail to locate a DC. <BR /> (a) Data analysis walkthrough example based on a real case: <BR /> <STRONG> 1.
Start with Lsass.log (SPNEGO trace) </STRONG> <BR /> <BR /> If you are troubleshooting a password change failure after MS16-101, look for the following text in <B> Lsass.log </B> to indicate that Kerberos failed and NTLM fallback was forbidden by MS16-101: <BR /> <BR /> <B> Failing Example: </B> <BR /> <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-WAPI: [11b0.1014] Dispatching API (Message 0) <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Trace: [11b0] LpcDispatch: dispatching ChangeAccountPassword (1a) <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Trace: [11b0] LpcChangeAccountPassword() <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Helpers: [11b0] LsapCopyFromClient(0000005EAB78C9D8, 000000DA664CE5E0, 16) = 0 <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword: <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword, attempting: NegoExtender <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword, attempting: Kerberos <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Warning: Failed to change password for account Test: 0xc000005e <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword, attempting: NTLM <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword, NTLM failed: not allowed to change domain passwords <BR /> [ 9/13 10:23:36] 492.2448&gt; SPM-Neg: NegChangeAccountPassword, returning: 0xc0000388 <BR /> <UL> <BR /> <LI> 0xc000005E is STATUS_NO_LOGON_SERVERS <BR /> 0xc0000388 is STATUS_DOWNGRADE_DETECTED </LI> <BR /> </UL> <BR /> If you see this, it means Kerberos failed to locate a Domain Controller in the domain, and fallback to NTLM is not allowed by MS16-101. Next, you should look at the <STRONG> Netlogon.log </STRONG> and the <STRONG> Network trace </STRONG> to understand why. <BR /> <BR /> <STRONG> 2. Network trace </STRONG> <BR /> <BR /> Look at the network trace and filter the traffic based on the client IP, DNS, and any authentication-related traffic.
<BR /> You may see the client requesting a Kerberos ticket using an invalid SPN, like: <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <STRONG> Source </STRONG> </TD> <TD> <STRONG> Destination </STRONG> </TD> <TD> <STRONG> Description </STRONG> </TD> </TR> <TR> <TD> Client </TD> <TD> DC1 </TD> <TD> KerberosV5:TGS Request Realm: CONTOSO.COM Sname: ldap/;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; {TCP:45, IPv4:7} </TD> </TR> <TR> <TD> DC1 </TD> <TD> Client </TD> <TD> KerberosV5:KRB_ERROR&nbsp; - KDC_ERR_S_PRINCIPAL_UNKNOWN (7)&nbsp; {TCP:45, IPv4:7} </TD> </TR> </TBODY></TABLE> <BR /> <BR /> So here the client tried to get a ticket for this ldap\ SPN and failed with KDC_ERR_S_PRINCIPAL_UNKNOWN because this SPN is not registered anywhere. <BR /> <UL> <BR /> <LI> This is expected. A valid LDAP SPN looks like, for example, ldap\DC1. </LI> <BR /> </UL> <BR /> Next, let’s check the Netlogon.log. <BR /> <BR /> <STRONG> 3. Netlogon.log: </STRONG> <BR /> <BR /> Open the log with any text editor (I like good old Notepad.exe) and check the following: <BR /> <UL> <BR /> <LI> <B> <STRONG> Is a valid domain name being passed to DC Locator? </STRONG> </B> </LI> <BR /> </UL> <BR /> Invalid names such as <A href="#" target="_blank"> \\ </A> or an IP address <A href="#" target="_blank"> \\x.y.z.w </A> will cause DC Locator to fail, and thus the Kerberos password change to return STATUS_NO_LOGON_SERVERS. Once that happens, NTLM fallback is not allowed and you get a failed password change. <BR /> <BR /> If you find this issue, examine the application code and make the necessary changes to <B> ensure the correct domain name format is being passed to the ChangePassword API </B> that is being used.
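To reproduce what DC Locator does with a given domain name outside of the application, you can call it directly with nltest (the domain names below are illustrative). nltest calls DsGetDcName under the covers, so a malformed name should fail the same way as the 1212 / ERROR_INVALID_DOMAINNAME entries you see in Netlogon.log:

```text
REM A valid DNS domain name should return a DC:
nltest /dsgetdc:contoso.com

REM A name beginning with \\, as passed by the broken application code,
REM should fail, mirroring the DsGetDcName 1212 seen in Netlogon.log:
nltest /dsgetdc:\\contoso.com
```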
<BR /> <BR /> <B> Example of failure in Netlogon.log: </B> <BR /> <BR /> [MISC] [PID] DsGetDcName function called: client PID=1234, Dom:\\ Acct:(null) Flags: IP KDC <BR /> [MISC] [PID] DsGetDcName function returns 1212 (client PID=1234): Dom:\\ Acct:(null) Flags: IP KDC <BR /> <BR /> \\ is not a valid domain name. ( is a valid domain name) <BR /> <BR /> This error translates to: <BR /> <TABLE> <TBODY><TR> <TD> 0x4bc </TD> <TD> 1212 </TD> <TD> ERROR_INVALID_DOMAINNAME </TD> <TD> The format of the specified domain name is invalid. </TD> <TD> winerror.h </TD> </TR> </TBODY></TABLE> <BR /> So what happened here? <BR /> The application code passed an invalid TargetName to Kerberos. It used the domain name <B> as a server name </B> and so we see the SPN of LDAP\ <BR /> <BR /> The client tried to get a ticket for this SPN and failed with KDC_ERR_S_PRINCIPAL_UNKNOWN because this SPN is not registered anywhere. As noted, this is expected. A valid LDAP SPN looks like, for example, ldap\ <B> DC1 </B> <BR /> <BR /> The application code then tried the password change again and passed in <A href="#" target="_blank"> \\ </A> as a domain name for the password change. Anything beginning with \\ is not a valid domain name, and neither is an IP address. So DC Locator will fail to locate a DC when given this domain name. We can see this in the Netlogon.log and the network trace. <BR /> <BR /> <B> <STRONG> Conclusion and Solution </STRONG> </B> <BR /> <BR /> If the domain name is invalid here, examine the code snippet which is doing the password change to understand why the wrong name is passed in. <BR /> <BR /> <B> The fix in these cases is to change the code to ensure a valid domain name is passed to Kerberos, allowing the password change to happen over Kerberos and not NTLM. NTLM is not secure; if Kerberos is possible, it should be the protocol used.
</B> <BR /> <BR /> <B> <STRONG> SOLUTION </STRONG> </B> <BR /> <BR /> The solution here was to remove "ContextOptions. <B> ServerBind </B> |&nbsp; ContextOptions.SimpleBind" and allow the code to use the default (Negotiate). Note: using a Domain context together with ServerBind is what caused the issue. Negotiate with a Domain context is the option that works and is able to use Kerberos successfully. <BR /> <BR /> Working code: <BR /> <BR /> &lt;snip&gt; <BR /> Add-Type -AssemblyName System.DirectoryServices.AccountManagement <BR /> $ct = [System.DirectoryServices.AccountManagement.ContextType]::Domain <BR /> $pc = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ct, "","OU=Accounts,DC=Contoso,DC=Com") <BR /> $idType = [System.DirectoryServices.AccountManagement.IdentityType]::SamAccountName <BR /> $up = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($pc,$idType, "TestUser") <BR /> $up.ChangePassword("oldPassword!123", "newPassword!123") <BR /> &lt;snip&gt; <BR /> Why does this code work before MS16-101 and fail after? <BR /> <BR /> <BR /> <B> ContextOptions </B> are documented here: <A href="#" target="_blank"> </A> <BR /> <BR /> Specifically: <EM> “This parameter specifies the options that are used for binding to the <B> server. </B> The application can set multiple options that are linked with a bitwise OR operation.” </EM> <BR /> <BR /> Passing in a domain name with the ContextOptions ServerBind or SimpleBind causes the client to attempt to use an SPN like <B> ldap\ </B> because it expects the name which is passed in to be a server name. <BR /> <BR /> This is not a valid SPN and does not exist, therefore this will fail, and as a result Kerberos will fail with STATUS_NO_LOGON_SERVERS. <BR /> Before MS16-101, in this scenario, the Negotiate package would fall back to NTLM, attempt the password change using NTLM, and succeed.
<BR /> Post-MS16-101, this fallback is not allowed and Kerberos is enforced. <BR /> (b) If the application code is good but Kerberos fails to locate a DC for another reason <BR /> <BR /> <BR /> If you see a correct domain name and SPNs in the above logs, then the issue is that Kerberos fails for some other reason, such as blocked TCP ports. In this case, revert to Scenario 1 to troubleshoot why Kerberos failed to locate a Domain Controller. <BR /> <BR /> There is a chance that you may have both (a) and (b). Traces and logs are the best tools to identify which. <BR /> Scenario 6: After you install the MS16-101 update, you may encounter 0xC0000022 NTLM authentication errors. <BR /> I will not go into detail on this scenario as it is well described in the KB article <A href="#" target="_blank"> KB3195799 </A> NTLM authentication fails with 0xC0000022 error for Windows Server 2012, Windows 8.1, and Windows Server 2012 R2 after update is applied. <BR /> <BR /> That’s all for today! I hope you find this useful. I will update this post if any new information arises. <BR /> <BR /> <B> Linda Taylor | Senior Escalation Engineer | Windows Directory Services <BR /> </B> (A well established member of the content police.) </BODY></HTML> Fri, 05 Apr 2019 03:21:04 GMT Lindakup 2019-04-05T03:21:04Z Access-Based Enumeration (ABE) Troubleshooting (part 2 of 2) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 21, 2016 </STRONG> <BR /> <P> Hello everyone! <A href="#" target="_blank"> Hubert </A> from the German Networking Team here again with part two of my little blog post series about <B> Access-Based Enumeration (ABE). </B> In the <A href="#" target="_blank"> first part </A> I covered some of the basic concepts of ABE. In this second part I will focus on monitoring and troubleshooting Access-Based Enumeration.
<BR /> We will begin with a quick overview of Windows Explorer’s directory change notification mechanism (Change Notify) and how that mechanism can lead to performance issues, before moving on to monitoring your environment for those issues. </P> Change Notify and its impact on DFSN servers with ABE <P> Let’s say you are viewing the contents of a network share while a file or folder is added to the share remotely by someone else. Your view of this share will be updated automatically with the new contents of the share, without you having to manually refresh (press F5) your view. <BR /> Change Notify is the mechanism that makes this work in all SMB protocols (1, 2 and 3). <BR /> The way it works is quite simple: </P> <OL> <LI> The client sends a <A href="#" target="_blank"> CHANGE_NOTIFY </A> request to the server indicating the directory or file it is interested in. Windows Explorer (as an application on the client) does this by default for the directory that is currently in focus. </LI> <LI> Once there is a change to the file or directory in question, the server will respond with a CHANGE_NOTIFY response, indicating that a change happened. </LI> <LI> This causes the client to send a QUERY_DIRECTORY request (in the case of a directory or DFS Namespace) to the server to find out what has changed. <BR /> <BR /> QUERY_DIRECTORY is the operation we discussed in the first post that causes ABE filter calculations. Recall that it’s these filter calculations that result in CPU load and client-side delays. <BR /> Let’s look at a common scenario: </LI> <LI> During login, your users get a mapped drive pointing at a share in a DFS Namespace. </LI> <LI> This mapped drive causes the clients to connect to your DFSN servers. </LI> <LI> The client sends a Change Notification (even if the user hasn’t tried to open the mapped drive in Windows Explorer yet) for the DFS root. <BR /> <BR /> Nothing more happens until there is a change on the server side.
Administrative work, such as adding and removing links, typically happens during business hours, whenever the administrators find the time or the script that does it runs. <BR /> <BR /> Back to our scenario. Let’s have a server-side change to illustrate what happens next: </LI> <LI> We add a link to the DFS Namespace. </LI> <LI> Once the DFSN server picks up the new link in the namespace from Active Directory, it will create the corresponding reparse point in its local file system. <BR /> If you do not use <A href="#" target="_blank"> Root Scalability Mode </A> (RSM), this will happen almost at the same time on all of the DFS servers in that namespace. With RSM, the changes will usually be applied by the different DFS servers over the next hour (or whatever your SyncInterval is set to). </LI> <LI> These changes trigger CHANGE_NOTIFY responses to be sent out to any client that indicated interest in changes to the DFS root on that server. This usually applies to hundreds of clients per DFS server. </LI> <LI> This causes hundreds of clients to send QUERY_DIRECTORY requests simultaneously. </LI> </OL> <P> What happens next strongly depends on the size of your namespace (larger namespaces lead to a longer duration per ABE calculation) and the number of clients (i.e. requests) per CPU of the DFSN server (remember the calculation from the first part?). </P> <P> As your server does not have hundreds of CPUs, there will definitely be some backlog. The numbers above determine how big this backlog will be, and how long it takes for the server to work its way back to normal. Keep in mind that while working through the backlog, your server still has to answer other, ongoing requests that are unrelated to our Change Notify event.
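To get a feel for the scale of such a burst, here is a rough back-of-the-envelope calculation. All input numbers are purely illustrative assumptions; measure the ABE calculation time and client count in your own environment:

```powershell
# Illustrative estimate of the backlog after a namespace change.
$clients    = 500    # clients holding a CHANGE_NOTIFY on the DFS root
$abeCalcMs  = 200    # assumed time for one ABE filter calculation (ms)
$cpuCores   = 8      # CPU cores available on the DFSN server

# Every client re-enumerates at once after the CHANGE_NOTIFY response:
$totalCpuMs = $clients * $abeCalcMs
$backlogSec = [math]::Round($totalCpuMs / $cpuCores / 1000, 1)
"Worst-case time to drain the burst: about $backlogSec seconds"
```

Even with these modest numbers the server spends a noticeable stretch doing nothing but ABE calculations, during which all other SMB requests queue up behind them.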
<BR /> Suffice it to say, this backlog and the CPU demand associated with it can also have a negative impact on other tasks. For example, if you use this DFSN server to make a bunch of changes to your namespace, these changes will appear to take forever, simply because the executing server is starved of CPU cycles. The same holds true if you run other workloads on the same server or want to RDP into the box. </P> <P> <B> <STRONG> <STRONG> <STRONG> So! What can you do about it? </STRONG> <BR /> </STRONG> </STRONG> </B> As is common with an overloaded server, there are a few different approaches you could take: </P> <UL> <LI> Distribute the load across more servers (and CPU cores) </LI> <LI> Make changes outside of business hours </LI> <LI> Disable Change Notify in Windows Explorer </LI> </UL> <TABLE> <TBODY><TR> <TD> <P> Approach </P> </TD> <TD> <P> Method </P> </TD> </TR> <TR> <TD> <P> Distribute the load / scale up </P> </TD> <TD> <P> An expensive way to handle the excessive load is to throw more servers/CPU cores at the DFS infrastructure. In theory, you could increase the number of servers and the number of CPUs to a level where you can handle such peak loads without any issues, but that can be a very expensive approach. </P> </TD> </TR> <TR> <TD> <P> Make changes outside business hours </P> </TD> <TD> <P> Depending on your organization's structure, your business needs, SLAs and other requirements, you could simply make planned administrative changes to your namespaces outside the main business hours, when there are fewer clients connected to your DFSN servers. </P> </TD> </TR> <TR> <TD> <P> Disable Change Notify in Windows Explorer </P> </TD> <TD> <P> You can set: <BR /> <I> NoRemoteChangeNotify </I> <BR /> <I> NoRemoteRecursiveEvents <BR /> </I> See <A href="#" target="_blank"> </A> <BR /> to prevent Windows Explorer from sending Change Notification requests.
<BR /> This is, however, a client-side setting that disables Change Notify not just for DFS shares, but for any file server the client works with. You then have to actively press F5 to see changes to a folder or a share in Windows Explorer. This might or might not be a big deal for your users. </P> </TD> </TR> </TBODY></TABLE> Monitoring ABE <P> As you may have realized by now, ABE is not a fire-and-forget technology; it needs constant oversight and occasional tuning. We’ve mainly discussed the design and “tuning” aspects so far. Let’s look into the monitoring aspect. </P> Using Task Manager / Process Explorer <P> This is a bit tricky, unfortunately, as any load caused by ABE shows up in Task Manager inside the <B> System </B> process (as do many other things on the server). In order to correlate high CPU utilization in the System process to ABE load, you need to use a tool such as <A href="#" target="_blank"> Process Explorer </A> and <A href="#" target="_blank"> configure it </A> to use <A href="#" target="_blank"> public symbols </A> . With this configured properly, you can drill deeper into the System process and see the different threads and the component names. Note that ABE and the file server both use functions in srv.sys and srv2.sys, so strictly speaking it’s not possible to differentiate between them just by the component names. However, if you are troubleshooting a performance problem on an ABE-enabled server where most of the threads in the System process are sitting in functions from srv.sys and srv2.sys, then it’s very likely due to expensive ABE filter calculations. This is, aside from disabling ABE, the best approach to reliably prove that your problem is caused by ABE. </P> Using Network trace analysis <P> Looking at CPU utilization shows us the server-side problem.
We must use other measures to determine what the client-side impact is. One approach is to take a network trace and analyze the SMB/SMB2 service response times. You may, however, end up having to capture the trace on a mirrored switch port. To make analysis of this a bit easier, <A href="#" target="_blank"> Message Analyzer </A> has an SMB Service Performance chart you can use. </P> <P> <IMG src="" /> </P> <P> You get there by using a New Viewer, like below. <BR /> </P> <P> <IMG src="" /> </P> <P> Wireshark also has a feature that provides you with statistics, under Statistics -&gt; Service Response Times -&gt; SMB2. Ignore the values for ChangeNotify (it’s normal for them to be several seconds or even minutes). All other response times translate into delays for the clients. If you see values over a second, you can consider your file service not just slow, but outright broken. <BR /> While you have that trace in front of you, you can also look for SMB/TCP connections that are terminated abnormally by the client because the server failed to respond to the SMB requests in time. If you have any of those, then you have clients unable to connect to your file service, likely throwing error messages. </P> Using Performance Monitor <P> If your server is running Windows Server 2012 or newer, the following performance counters are available: </P> <TABLE> <TBODY><TR> <TD> <P> <B> <STRONG> Object </STRONG> </B> </P> </TD> <TD> <P> <B> <STRONG> Counter </STRONG> </B> </P> </TD> <TD> <P> <B> <STRONG> Instance </STRONG> </B> </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. sec/Data Request </P> </TD> <TD> <P> &lt;Share that has ABE Enabled&gt; </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. sec/Read </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. sec/Request </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg.
sec/Write </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. Data Queue Length </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. Read Queue Length </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Avg. Write Queue Length </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> <TR> <TD> <P> SMB Server Shares </P> </TD> <TD> <P> Current Pending Requests </P> </TD> <TD> <P> ‘’ </P> </TD> </TR> </TBODY></TABLE> <P> <IMG src="" /> </P> <P> Most notable here is the <B> <STRONG> Avg. sec/Request </STRONG> </B> counter, as it contains the response time to the QUERY_DIRECTORY requests (Wireshark displays them as Find Requests). The other values will suffer from a lack of CPU cycles in varying ways, but all indicate delays for the clients. As mentioned in the first part: we expect single-digit-millisecond response times from non-ABE file servers that are performing well. For ABE-enabled servers (more precisely, shares) the values for QUERY_DIRECTORY / Find Requests will always be higher, due to the inevitable length of the ABE calculation. </P> <P> When you reach a state where all SMB requests aside from QUERY_DIRECTORY are consistently answered in less than 10 ms, and QUERY_DIRECTORY consistently in less than 50 ms, you have a very well performing server with ABE. </P> Other Symptoms <P> There are other symptoms of ABE problems that you may observe; however, none of them is very telling on its own, without the information from the points above. </P> <P> At first glance, high CPU utilization and a high Processor Queue Length are indicators of an ABE problem; however, they are also indicators of other CPU-related performance issues. Not to mention there are cases where you encounter ABE performance problems without saturating all your CPUs.
</P> <P> The Server Work Queues\Active Threads (NonBlocking) will usually rise to their maximum allowed limit ( <A href="#" target="_blank"> MaxThreadsPerQueue </A> ), and the Server Work Queues\Queue Length will increase as well. Both indicate that the file server is busy, but on their own they don’t tell you how bad the situation is. Moreover, there are scenarios where the file server will not use up all the worker threads allowed, due to a bottleneck somewhere else, such as in the disk subsystem or the CPU cycles available to it. </P> <P> Should you choose to set up long-term monitoring (which you should) in order to get some trends, consider tracking the following: </P> <P> <I> Number of objects per directory or number of DFS links </I> <BR /> <I> Number of peak user requests (Performance Counter: Requests/sec) </I> <BR /> <I> Peak server response time to Find Requests, or Performance Counter: Avg. sec/Request </I> <BR /> <I> Peak CPU utilization and peak Processor Queue Length </I> </P> <P> If you collect those values every day (or at a shorter interval), you can get a pretty good picture of how much headroom your servers have left at the moment, and whether there are trends you need to react to. </P> <P> Feel free to add more information to your monitoring to get a better picture of the situation. For example: gather information on how many DFS servers were active on any given day for a certain site, so you can explain whether unusually high numbers of user requests on the other servers come from a server downtime. </P> ABELevel <P> Some of you might have heard about the registry key <A href="#" target="_blank"> ABELevel </A> . The ABELevel value specifies the maximum level of the folders on which the ABE feature is enabled.
While the title of the KB sounds very promising, and the hotfix is presented as a “Resolution”, the hotfix and registry value have very little practical application.&nbsp; Here’s why: <BR /> <STRONG> ABELevel </STRONG> is a system-wide setting and does not differentiate between different shares on the same server. If you host several shares, you are unable to filter to different depths, as the setting forces you to go with the deepest folder hierarchy. This results in unnecessary filter calculations for the other shares. </P> <P> Usually the widest directories are at the upper levels, the very levels that you need to filter. Disabling the filtering for the lower-level directories doesn’t yield much of a performance gain, as those small directories don’t have much impact on server performance, while the big top-level directories do. Furthermore, the registry value doesn’t make any sense for DFS Namespaces, as you have only one folder level there, and you should avoid filtering on your file servers anyway. </P> While we are talking about Updates <P> Here is one that you should install: <BR /> High CPU usage and performance issues occur when access-based enumeration is enabled in Windows 8.1 or Windows 7 - <A href="#" target="_blank"> </A> <BR /> </P> <P> Furthermore, you should definitely review the lists of recommended updates for your server components: <BR /> DFS <BR /> <A href="#" target="_blank"> </A> (2008 / 2008 R2) <BR /> <A href="#" target="_blank"> </A> (2012 / 2012 R2) <BR /> </P> <P> File Services <BR /> <A href="#" target="_blank"> </A> (2008 / 2008 R2) <BR /> <A href="#" target="_blank"> </A> (2012 / 2012 R2) <BR /> <BR /> Well then, this concludes this small (my first) blog series. <BR /> I hope you found reading it worthwhile and got some input for your infrastructures out there.
</P> <P> With best regards <BR /> Hubert </P> </BODY></HTML> Fri, 05 Apr 2019 03:20:47 GMT JustinTurner 2019-04-05T03:20:47Z Access-Based Enumeration (ABE) Concepts (part 1 of 2) <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 01, 2016 </STRONG> <BR /> <P> Hello everyone, <A href="#" target="_blank"> Hubert </A> from the German Networking Team here.&nbsp; Today I want to revisit a topic that I wrote about in 2009: <B> Access-Based Enumeration (ABE) </B> </P> <BR /> <P> This is the first part of a two-part series. This first part will explain some conceptual things around ABE.&nbsp; The second part will focus on diagnosing and troubleshooting ABE-related problems.&nbsp; The second post is <A href="#" target="_blank"> here </A> . </P> <BR /> <P> Access-Based Enumeration has existed since Windows Server 2003 SP1 and has not changed in any significant way since my <A href="#" target="_blank"> blog post </A> in 2009. However, what has significantly changed is its popularity. </P> <BR /> <P> With its integration into V2 (2008 mode) DFS Namespaces and the increasing demand for data privacy, it became a tool of choice for many architects. However, the same strict limitations and performance impact it had in Windows Server 2003 still apply today. With this post, I hope to shed some more light here, as these limitations and the performance impact are either unknown or often ignored. Read on to gain a little insight and background on ABE so that you: </P> <BR /> <OL> <BR /> <LI> Understand its capabilities and limitations </LI> <BR /> <LI> Gain the background knowledge needed for my next post on how to troubleshoot ABE </LI> <BR /> </OL> <BR /> <P> Two things to keep in mind: </P> <BR /> <UL> <BR /> <LI> ABE is not a security feature (it’s more of a convenience feature) </LI> <BR /> <LI> There is no guarantee that ABE will perform well under all circumstances. If performance issues come up in your deployment, disabling ABE is a valid solution.
</LI> <BR /> </UL> <BR /> <P> So without any further ado, let’s jump right in: </P> <BR /> <P> <B> <STRONG> What is ABE and what can I do with it? <BR /> <BR /> </STRONG> </B> From the <A href="#" target="_blank"> TechNet topic </A> : </P> <BR /> <BLOCKQUOTE> <P> “Access-based enumeration displays only the files and folders that a user has permissions to access. If a user does not have Read (or equivalent) permissions for a folder, Windows hides the folder from the user’s view. This feature is active only when viewing files and folders in a shared folder; it is not active when viewing files and folders in the local file system.” </P> </BLOCKQUOTE> <BR /> <P> Note that ABE has to check the user’s permissions at the time of enumeration and filter out the files and folders they don’t have Read permissions to. Also note that this filtering only applies if the user accesses the share via SMB, as opposed to browsing the same folder structure in the local file system. <BR /> <BR /> For example, let’s assume you have an ABE-enabled file server share with 500 files and folders, but a certain user only has Read permissions to 5 of those folders. The user is only able to view 5 folders when accessing the share over the network. If the user logs on to this server and browses the local file system, they will see all of the files and folders. </P> <BR /> <P> In addition to file server shares, ABE can also be used to filter the links in DFS Namespaces. <BR /> <BR /> With V2 Namespaces, DFSN gained the capability to store permissions for each DFSN link and apply those permissions to the local file system of each DFSN server. <BR /> <BR /> Those NTFS permissions are then used by ABE to filter directory enumerations against the DFSN root share, thus removing DFSN links from the results sent to the client.
</P> <BR /> <P> Therefore, ABE can be used either to hide sensitive information in the link/folder names, or to increase usability by hiding hundreds of links/folders the user does not have access to. </P> <BR /> <P> <B> <STRONG> How does it work? <BR /> <BR /> </STRONG> </B> The filtering happens on the file server at the time of the request. <BR /> <BR /> Any object (file / folder / shortcut / reparse point / etc.) where the user has less than generic read permissions is omitted from the response by the server. <BR /> <BR /> Generic read means: </P> <BR /> <UL> <BR /> <LI> List Folder / Read Data </LI> <BR /> <LI> Read Attributes </LI> <BR /> <LI> Read Extended Attributes </LI> <BR /> <LI> Read Permissions </LI> <BR /> </UL> <BR /> <P> If you take any of these permissions away, ABE will hide the object. <BR /> <BR /> So you could create a scenario (e.g. by removing only the Read Permissions permission) where the object is hidden from the user, but they could still open/read the file or folder if they know its name. </P> <BR /> <P> That brings us to the next important conceptual point we need to understand: </P> <BR /> <P> <B> <STRONG> ABE does not do access control. </STRONG> </B> <BR /> <BR /> It only filters the response to a directory enumeration. The access control is still done through NTFS. <BR /> <BR /> Aside from that, ABE only works when the access happens through the Server service (i.e. the file server). Any local access to the file system is not affected by ABE. Restated: </P> <BR /> <BLOCKQUOTE> <P> “Access-based enumeration does not prevent users from obtaining a referral to a folder target if they already know the DFS path of the folder with targets. Permissions set using Windows Explorer or the Icacls command on namespace roots or folders without targets control whether users can access the DFS folder or namespace root. However, they do not prevent users from directly accessing a folder with targets.
Only the share permissions or the NTFS file system permissions of the shared folder itself can prevent users from accessing folder targets.” Recall what I said earlier: “ABE is not a security feature”. <A href="#" target="_blank"> TechNet </A> </P> </BLOCKQUOTE> <BR /> <P> <B> <STRONG> ABE does not do any caching. </STRONG> </B> <BR /> <BR /> Every request causes a filter calculation. There is no cache. ABE will repeat the exact same work for identical directory enumerations by the same user. </P> <BR /> <P> <B> <STRONG> ABE cannot predict the permissions or the result. </STRONG> </B> <BR /> <BR /> It has to do the calculations for each object in every level of your folder hierarchy every time it is accessed. <BR /> <BR /> If you use inheritance on the folder structure, a user will have the same permissions and thus the same filter result from ABE through the entire folder structure. Still, ABE has to calculate this result, consuming CPU cycles in the process. <BR /> <BR /> If you enable ABE on such a folder structure, you are just wasting CPU cycles without any gain. </P> <BR /> <P> With those basics out of the way, let’s dive into the mechanics behind the scenes: </P> <BR /> <P> <B> <STRONG> How the filtering calculation works </STRONG> </B> </P> <BR /> <OL> <BR /> <LI> When a QUERY_DIRECTORY request ( <A href="#" target="_blank"> </A> ) or its SMB1 equivalent arrives at the server, the server will get a list of objects within that directory from the file system. </LI> <BR /> <LI> With ABE enabled, this list is not immediately sent out to the client, but instead passed over to ABE for processing. </LI> <BR /> <LI> ABE will iterate through EVERY object in this list and compare the user’s permissions with the object’s ACL. </LI> <BR /> <LI> The objects where the user does not have generic read access are removed from the list. </LI> <BR /> <LI> After ABE has completed its processing, the client receives the filtered list.
</LI> <BR /> </OL> <BR /> <P> This yields two effects: </P> <BR /> <UL> <BR /> <LI> This comparison is an active operation and thus consumes CPU cycles. </LI> <BR /> <LI> This comparison takes time, and this time is passed down to the user, as the results are only sent once the comparisons for the entire directory are completed. </LI> <BR /> </UL> <BR /> <P> <B> <STRONG> This brings us directly to the core point of this blog: <BR /> <BR /> </STRONG> </B> In order to successfully use ABE in your environment, you have to manage both effects. <BR /> <BR /> If you don’t, ABE can cause a widespread outage of your file services. </P> <BR /> <P> <B> <STRONG> The first effect can cause a complete saturation of your CPUs (all cores at 100%). </STRONG> </B> <BR /> <BR /> This not only increases the response times of the file server to its clients to a magnitude where the server no longer accepts any new connections, or the clients kill their connection after not getting a response from the server for several minutes; it can also prevent you from establishing a remote desktop connection to the server to make any changes (like disabling ABE, for instance). </P> <BR /> <P> <B> <STRONG> The second effect can increase the response times of your file server (even if it is otherwise idle) to a magnitude that is no longer accepted by the users. </STRONG> </B> <BR /> <BR /> The comparison for a single directory enumeration by a single user can keep one CPU in your server busy for quite some time, thus making it more likely for new incoming requests to overlap with already running ABE calculations. This eventually results in a backlog, adding further to the delays experienced by your clients. </P> <BR /> <P> <B> To illustrate this, let’s roll some numbers: </B> <BR /> <BR /> A little disclaimer: <BR /> <BR /> <I> The following calculation is what I’ve seen; your results may differ, as there are many moving pieces in play here.
In other words, your mileage may vary. That aside, the numbers seen here are not entirely off but stem from real production environments. </I> <I> Performance of disk and CPU and other workloads play into these numbers as well. </I> <BR /> <BR /> <I> Thus the calculation and numbers are for illustration purposes only. Don’t use them to calculate your server’s performance capabilities. </I> </P> <BR /> <P> Let’s assume you have a DFS Namespace with 10,000 links that is hosted on DFS servers that have 4 CPUs at 3.5 GHz (also assuming RSS is configured correctly and all 4 CPUs are used by the file service: <A href="#" target="_blank"> </A> ). <BR /> <BR /> We usually expect single-digit-millisecond response times measured at the file server to achieve good performance (network latency obviously adds to the numbers seen on the client). <BR /> <BR /> In our scenario above (10,000 links, ABE, 3.5 GHz CPU) it is not unheard of for a single enumeration of the namespace to take 500 ms. </P> <BR /> <TABLE> <TBODY><TR> <TD> CPU cores and speed </TD> <TD> DFS Namespace Links </TD> <TD> RSS configured per recommendations </TD> <TD> ABE enabled? </TD> <TD> Response time </TD> </TR> <TR> <TD> 4 @ 3.5 GHz </TD> <TD> 10,000 </TD> <TD> Yes </TD> <TD> No </TD> <TD> &lt;10ms </TD> </TR> <TR> <TD> 4 @ 3.5 GHz </TD> <TD> 10,000 </TD> <TD> Yes </TD> <TD> <B> Yes </B> </TD> <TD> <B> 300 – 500 ms </B> </TD> </TR> </TBODY></TABLE> <BR /> <P> That means a single CPU can handle up to 2 directory enumerations per second. Multiplied by 4 CPUs, the server can handle 8 user requests per second. Any more than those 8 requests and we push the server into a backlog. <BR /> <BR /> Backlog in this case means new requests are stuck in the processor queue behind other requests, therefore multiplying the wait time.
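</P> <P> The arithmetic above can be made explicit with a short sketch. The 500 ms per-enumeration cost and the incoming rate of 20 requests per second are illustrative assumptions, not measurements: </P>

```python
# Back-of-the-envelope capacity math for the ABE example above.
cores = 4
enum_time_s = 0.500                      # assumed cost of one filtered enumeration

per_core_rate = 1 / enum_time_s          # enumerations one core finishes per second
server_rate = cores * per_core_rate      # whole-server capacity

print(f"capacity: {server_rate:.0f} enumerations/second")   # capacity: 8 enumerations/second

# Beyond that capacity, the processor queue grows without bound.
# For an assumed incoming rate of 20 enumeration requests per second:
incoming_rate = 20
queue_growth = incoming_rate - server_rate
print(f"queue grows by {queue_growth:.0f} requests per second")
```

<P> Past the 8-per-second capacity, the queue, and with it the wait time experienced by every queued client, grows without bound.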
<BR /> <BR /> This can reach dimensions where the client (and the user) is <B> <STRONG> waiting for minutes </STRONG> </B> and the client eventually decides to kill the TCP connection and, in the case of DFSN, fail over to another server. </P> <BR /> <P> Anyone remotely familiar with file server scalability probably instantly recognizes how bad and frightening those numbers are. Please keep in mind that not every request sent to the server is a QUERY_DIRECTORY request, and all other requests such as Write, Read, Open, Close, etc. do not cause an ABE calculation (however, they suffer from an ABE-induced lack of CPU resources in the same way). <BR /> <BR /> Furthermore, the Windows file service client caches the directory enumeration results if SMB2 or SMB3 is used ( <A href="#" target="_blank"> </A> ). <BR /> <BR /> There is no such cache for SMB1. Thus SMB1 clients will send more directory enumeration requests than SMB2 or SMB3 clients (particularly if you keep the F5 key pressed). <BR /> <BR /> It should now be obvious that you should use SMB2/3 rather than SMB1, and ensure you leave the caches enabled if you use ABE on your servers. </P> <BR /> <P> As you might have realized by now, there is no easy or reliable way to predict the CPU demand of ABE. If you are developing a completely new environment, you usually cannot forecast the proportion of QUERY_DIRECTORY requests in relation to the other requests, or their frequency. </P> <BR /> <P> <B> <STRONG> Recommendations! <BR /> <BR /> </STRONG> </B> The most important recommendation I can give you is: </P> <BR /> <P> <B> <STRONG> Do not enable ABE unless you really need to. </STRONG> </B> <BR /> <BR /> Let’s take the users’ home shares as an example: <BR /> <BR /> Usually there is no user browsing manually through this structure; instead, the users get a mapped drive pointing to their folder.
So the usability aspect does not apply. Additionally, most users will know (or can find out from the office address book) the names or aliases of their colleagues, so there is no sensitive information to hide here. For ease of management, most home shares live in big namespaces or server shares, which makes them very unfit for use with ABE. In many cases the user has full control (or at least write permissions) inside their own home share. Why should I waste my CPU cycles to filter the requests inside someone’s home share? </P> <BR /> <P> Considering all those points, I would be intrigued to hear a compelling argument for enabling ABE on user home shares or roaming profile shares. Please sound off in the comments. </P> <BR /> <P> If you have a data structure where you really need to enable ABE, your file service concept needs to satisfy these four requirements: </P> <BR /> <P> <B> <STRONG> You need scalability. </STRONG> </B> <BR /> <BR /> You need the ability to increase the number of CPUs doing the ABE calculations in order to react to increasing numbers (directory sizes, number of clients, usage frequency) and thus performance demand. <BR /> <BR /> The easiest way to achieve this is to do <A href="#" target="_blank"> ABE filtering exclusively in DFS Domain Namespaces </A> and not on the file servers. <BR /> <BR /> That way you can easily add more CPUs by simply adding further namespace servers in the sites where they are required. <BR /> <BR /> Also keep in mind that you should have some redundancy, and that another server might not be able to take the full additional load of a failing server on top of its own load. </P> <BR /> <P> <B> <STRONG> You need small chunks </STRONG> </B> <BR /> <BR /> The number of objects that ABE needs to check for each calculation is the single most important factor for the performance requirement.
<BR /> <BR /> Instead of having a single big 10,000-link namespace (the same applies to directories on file servers), build 10 smaller 1,000-link namespaces and combine them into a DFS cascade. <BR /> <BR /> That way, ABE only needs to filter 1,000 objects for every request. <BR /> <BR /> Just re-do the example calculation above with 250 ms, 100 ms, 50 ms or even less. <BR /> <BR /> You will notice that you are suddenly able to reach very decent numbers in terms of requests per second. <BR /> <BR /> The other nice side effect is that you will do fewer calculations, as the user will usually follow only one branch in the directory tree, and is thus not causing ABE calculations for the other branches. </P> <BR /> <P> <B> <STRONG> You need separation of workloads. </STRONG> </B> <BR /> <BR /> Having your SQL Server run on the same machine as your ABE server can cause a lack of performance for both workloads. <BR /> <BR /> Having ABE run on your Domain Controller exposes the Domain Controller role to the risk of being starved of CPU cycles and thus no longer servicing domain logons. </P> <BR /> <P> <B> <STRONG> You need to test and monitor your performance </STRONG> </B> <BR /> <BR /> In many cases you are deploying a new file service concept into an existing environment. <BR /> <BR /> Thus you can get some numbers regarding QUERY_DIRECTORY requests from the existing DFS / file servers. <BR /> <BR /> Build up your namespace / shares as you envisioned and use the File Server Capacity Tool ( <A href="#" target="_blank"> </A> ) to simulate the expected load against it. <BR /> <BR /> Monitor the SMB service response times, the processor utilization and queue length, and the responsiveness on the client while browsing through the structures. <BR /> <BR /> This should give you an idea of how many servers you will need, and whether a slimmer design of the data structures is required.
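</P> <P> Re-doing the example calculation for the smaller chunks suggested above is quick; the per-enumeration times here are the illustrative values from the text, not measurements: </P>

```python
# Server-wide enumeration capacity as the per-request ABE cost shrinks
# (smaller namespaces mean fewer objects to filter per request).
cores = 4

def requests_per_second(enum_time_ms):
    """How many filtered enumerations per second the 4 cores can complete."""
    return cores * (1000 / enum_time_ms)

for ms in (500, 250, 100, 50):
    print(f"{ms:3d} ms/enumeration -> {requests_per_second(ms):4.0f} requests/second")
```

<P> Cutting the per-enumeration cost from 500 ms to 50 ms takes the same four cores from 8 to 80 enumerations per second, which is why chunking the namespace pays off so dramatically.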
<BR /> <BR /> Keep monitoring those values through the lifecycle of your file server deployment in order to scale up in time. <BR /> <BR /> Any deployment of new software or clients, or the normal increase in data structure size, could throw off your initial calculations and test results. <BR /> <BR /> <B> <STRONG> In my opinion, this point should be outlined very clearly in any concept documentation. </STRONG> </B> </P> <BR /> <P> This concludes the first part of this blog series. <BR /> <BR /> I hope you found it worthwhile and gained an understanding of how to successfully design a file service with ABE. <BR /> <BR /> Now, to round off your knowledge, or if you need to troubleshoot a performance issue on an ABE-enabled server, I strongly encourage you to read the second part of this blog series. This post will be updated as soon as it’s live. </P> <BR /> <P> With best regards, <BR /> <BR /> Hubert </P> </BODY></HTML> Fri, 05 Apr 2019 03:20:12 GMT JustinTurner 2019-04-05T03:20:12Z Deploying Group Policy Security Update MS16-072 \ KB3163622 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jun 22, 2016 </STRONG> <BR /> <P> My name is Ajay Sarkaria &amp; I work with the Windows Supportability team at Microsoft. There have been many questions on deploying the newly released security update <A href="#" target="_blank"> MS16-072 </A> . </P> <BR /> <P> This post was written to provide guidance and answer questions administrators may have when deploying the newly released security update, <A href="#" target="_blank"> MS16-072 </A> , which addresses a vulnerability. The vulnerability could allow elevation of privilege if an attacker launches a man-in-the-middle (MiTM) attack against the traffic passing between a domain controller and the target machine on domain-joined Windows computers.
</P> <BR /> <P> <STRONG> The table below summarizes the KB article number for the relevant Operating System: </STRONG> </P> <BR /> <DIV> <TABLE> <TBODY><TR> <TD> <STRONG> Article # </STRONG> </TD> <TD> <STRONG> Title </STRONG> </TD> <TD> <STRONG> Context / Synopsis </STRONG> </TD> </TR> <TR> <TD> MSKB <A href="#" target="_blank"> 3163622 </A> </TD> <TD> MS16-072: Security Updates for Group Policy: June 14, 2016 </TD> <TD> Main article for MS16-072 </TD> </TR> <TR> <TD> MSKB <A href="#" target="_blank"> 3159398 </A> </TD> <TD> MS16-072: Description of the security update for Group Policy: June 14, 2016 </TD> <TD> MS16-072 for Windows Vista / Windows Server 2008, Windows 7 / Windows Server 2008 R2, Windows Server 2012, Windows 8.1 / Windows Server 2012 R2 </TD> </TR> <TR> <TD> MSKB <A href="#" target="_blank"> 3163017 </A> </TD> <TD> Cumulative update for Windows 10: June 14, 2016 </TD> <TD> MS16-072 for Windows 10 RTM </TD> </TR> <TR> <TD> MSKB <A href="#" target="_blank"> 3163018 </A> </TD> <TD> Cumulative update for Windows 10 Version 1511 and Windows Server 2016 Technical Preview 4: June 14, 2016 </TD> <TD> MS16-072 for Windows 10 1511 + Windows Server 2016 TP4 </TD> </TR> <TR> <TD> MSKB <A href="#" target="_blank"> 3163016 </A> </TD> <TD> Cumulative Update for Windows Server 2016 Technical Preview 5: June 14, 2016 </TD> <TD> MS16-072 for Windows Server 2016 TP5 </TD> </TR> <TR> <TD> TN: <A href="#" target="_blank"> MS16-072 </A> </TD> <TD> Microsoft Security Bulletin MS16-072 - Important </TD> <TD> Overview of changes in MS16-072 </TD> </TR> </TBODY></TABLE> </DIV> <BR /> What does this security update change? <BR /> <BR /> <P> The most important aspect of this security update is to understand the behavior changes affecting the way user Group Policy is applied on a Windows computer. <A href="#" target="_blank"> MS16-072 </A> changes the security context with which user group policies are retrieved. Traditionally, when a user group policy is retrieved, it is processed using the <STRONG> user's security context </STRONG> . </P> <BR /> <P> After <A href="#" target="_blank"> MS16-072 </A> is installed, user group policies are retrieved by using the <STRONG> computer's security context </STRONG> . This <STRONG> by-design </STRONG> behavior change protects domain-joined computers from a security vulnerability. </P> <BR /> <P> When a user group policy is retrieved using the computer's security context, the computer account will now <STRONG> need </STRONG> "read" access to retrieve the group policy objects (GPOs) that need to apply to the user. </P> <BR /> <P> Traditionally, all group policies were read if the <EM> "user" </EM> had read access, either directly or through membership in a domain group, e.g. Authenticated Users. </P> <BR /> What do we need to check before deploying this security update? <BR /> <BR /> <P> As discussed above, by default "Authenticated Users" have <STRONG> "Read" </STRONG> and <STRONG> "Apply Group Policy" </STRONG> on all Group Policy Objects in an Active Directory domain. </P> <BR /> <P> <STRONG> Below is a screenshot from the Default Domain Policy: </STRONG> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If permissions on the Group Policy Objects in your Active Directory domain have not been modified and are using the defaults, and as long as Kerberos authentication is working fine in your Active Directory forest (i.e.
there are no Kerberos errors visible in the system event log on client computers while accessing domain resources), there is nothing else you need to verify before you deploy the security update. </P> <BR /> <P> In some deployments, administrators may have removed the <STRONG> "Authenticated Users" </STRONG> group from some or all Group Policy Objects (security filtering, <EM> etc </EM> .) </P> <BR /> <P> In such cases, you will need to make sure of the following before you deploy the security update: </P> <BR /> <OL> <BR /> <LI> Check if the <STRONG> "Authenticated Users" </STRONG> group's read permissions were removed intentionally by the admins. If not, then you should probably add those back. For example, if you do not use any security filtering to target specific group policies to a set of users, you could add <STRONG> "Authenticated Users" </STRONG> back with the default permissions as shown in the example screenshot above. </LI> <BR /> <LI> If the "Authenticated Users" permissions were removed intentionally (security filtering, etc.), then as a result of the <STRONG> by-design change </STRONG> in this security update <EM> (i.e. to now use the computer's security context to retrieve user policies) </EM> , you will need to grant the computer account retrieving the group policy object (GPO) <STRONG> "Read" </STRONG> permissions (and <EM> not </EM> <STRONG> "Apply Group Policy" </STRONG> ). <BR /> <P> <STRONG> Example Screenshot: </STRONG> </P> <BR /> <P> <IMG src="" /> </P> <BR /> </LI> <BR /> </OL> <BR /> <P> In the above example screenshot, let's say an administrator wants <EM> "User-Policy" </EM> (the name of the Group Policy Object) to apply only to the user named <STRONG> "MSFT Ajay" </STRONG> and to no other user; the above is how the Group Policy would be filtered for other users. "Authenticated Users" has been removed intentionally in the above example scenario.
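</P> <P> The behavior change can be captured in a toy model. To be clear, this is an illustration of the logic described above, not actual Windows code; the function, the simplified ACL representation, and the names "MSFT Ajay" and "PC01$" are all assumptions of the sketch: </P>

```python
# Toy model of MS16-072: after the update, the GPO is *read* in the
# computer's security context, while "Apply Group Policy" is still
# evaluated against the user.

def user_gpo_applies(gpo_acl, user, computer, patched=True):
    reader = computer if patched else user          # which context does the read
    can_read = "Read" in gpo_acl.get(reader, set())
    can_apply = "ApplyGroupPolicy" in gpo_acl.get(user, set())
    return can_read and can_apply

# A security-filtered GPO with "Authenticated Users" removed:
acl = {"MSFT Ajay": {"Read", "ApplyGroupPolicy"}}

print(user_gpo_applies(acl, "MSFT Ajay", "PC01$", patched=False))  # True (pre-patch)
print(user_gpo_applies(acl, "MSFT Ajay", "PC01$", patched=True))   # False (computer cannot read)

acl["PC01$"] = {"Read"}   # the fix: grant the computer context Read (not Apply)
print(user_gpo_applies(acl, "MSFT Ajay", "PC01$", patched=True))   # True again
```

<P> In the real directory, the read permission would normally come from membership in "Authenticated Users" or "Domain Computers" rather than a grant to an individual computer account; the sketch flattens that to a direct entry for brevity.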
</P> <BR /> <P> Notice that no other user or group is included to have "Read" or "Apply Group Policy" permissions other than the default Domain Admins and Enterprise Admins. These groups do not have "Apply Group Policy" by default, so the GPO would not apply to the users of these groups and applies only to the user <STRONG> "MSFT Ajay" </STRONG> . </P> <BR /> <P> <STRONG> What will happen if there are Group Policy Objects (GPOs) in an Active Directory domain that are using security filtering as discussed in the example scenario above? </STRONG> </P> <BR /> <P> <STRONG> Symptoms when you have security-filtered Group Policy Objects (GPOs) like the above example and you install the security update MS16-072: </STRONG> </P> <BR /> <UL> <BR /> <LI> Printers or mapped drives assigned through Group Policy Preferences disappear. </LI> <BR /> <LI> Shortcuts to applications on users' desktops are missing. </LI> <BR /> <LI> Security-filtered group policies no longer process. </LI> <BR /> <LI> You may see the following change in gpresult: <EM> Filtering: Not Applied (Unknown Reason) </EM> </LI> <LI> If you are using Folder Redirection and the Folder Redirection group policy removal option is set to “Redirect the folder back to the user profile location when policy is removed,” the redirected folders are moved back to the client machine after installing this security update. </LI> <BR /> </UL> <BR /> What is the Resolution? <BR /> <BR /> <BR /> <P> Simply adding the "Authenticated Users" group with "Read" permissions on the Group Policy Objects (GPOs) should be sufficient. Domain computers are part of the "Authenticated Users" group. "Authenticated Users" have these permissions on any new Group Policy Objects (GPOs) by default. Again, the guidance is to add just "Read" permissions and not "Apply Group Policy" for "Authenticated Users". </P> <BR /> <P> <STRONG> What if adding Authenticated Users with Read permissions is not an option?
</STRONG> </P> <BR /> <P> If adding "Authenticated Users" with just "Read" permissions is not an option in your environment, then you will need to add the "Domain Computers" group with "Read" permissions. If you want to limit it beyond the Domain Computers group, administrators can also create a new domain group and add the relevant computer accounts to it, so you can limit the "Read" access on a Group Policy Object (GPO). However, computers will not pick up membership of the new group until a reboot. Also keep in mind that with this security update installed, this additional step is only required if the default "Authenticated Users" group has been removed from the policy where user settings are applied. </P> <BR /> <P> <STRONG> Example Screenshots: </STRONG> </P> <BR /> <P> <IMG src="" /> <IMG src="" /> </P> <BR /> <P> Now in the above scenario, after you install the security update, as the user group policy needs to be retrieved using the <STRONG> computer's security context </STRONG> (a domain-joined computer is part of the "Domain Computers" security group by default), the client computer will be able to retrieve the user policies that need to be applied to the user, and they will be processed successfully. </P> <BR /> How to identify GPOs with issues: <BR /> <BR /> <BR /> <P> In case you have already installed the security update and need to identify Group Policy Objects (GPOs) that are affected, the easy way is just to run <STRONG> gpupdate /force </STRONG> on a Windows client computer and then run <STRONG> gpresult /h new-report.html </STRONG> . Open <EM> new-report.html </EM> and review it for any errors like: <EM> "Reason Denied: Inaccessible, Empty or Disabled" </EM> </P> <BR /> <P> <IMG src="" /> </P> <BR /> What if there are a lot of GPOs?
<BR /> <BR /> <BR /> <P> A script is available that can detect all Group Policy Objects (GPOs) in your domain which may be missing "Read" permissions for "Authenticated Users". <BR /> You can get the script from here: <A href="#" target="_blank"> </A> </P> <P> <B> Pre-Reqs: </B> </P> <UL> <LI> The script can run only on Windows 7 and later operating systems that have RSAT or the GPMC installed, or on Domain Controllers running Windows Server 2008 R2 and later. </LI> <LI> The script works in a single-domain scenario. </LI> <LI> The script will detect all GPOs in your domain (not forest) which are missing "Authenticated Users" permissions, and gives you the option to add "Authenticated Users" with "Read" permissions (not "Apply Group Policy"). If you have multiple domains in your Active Directory forest, you will need to run this for each domain. </LI> <UL> <LI> Domain Computers are part of the Authenticated Users group. </LI> </UL> <LI> The script can only add permissions to the Group Policy Objects (GPOs) in the same domain as the context of the current user running the script. In a multi-domain forest, you must run it in the context of a Domain Admin of the other domain in your forest. </LI> </UL> <P> <B> Sample screenshots when you run the script: </B> </P> <P> In the first sample screenshot below, running the script detects all Group Policy Objects (GPOs) in your domain which are missing the Read permission for "Authenticated Users". </P> <P> <IMG src="" /> </P> <P> If you hit "Y", you will see the below message: </P> <P> <IMG src="" /> </P> <BR /> What if there are AGPM managed Group Policy Objects (GPOs)? <P> To change the permissions for all AGPM-managed GPOs and add "Authenticated Users" with Read permissions, follow these steps: </P> <P> Re-import all Group Policy Objects (GPOs) from production into the AGPM database. This ensures AGPM has the latest copy of the production GPOs.
</P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> Grant either "Authenticated Users" or "Domain Computers" the Read permission using the Production Delegation tab by selecting the security principal, granting the "Read" role, and then clicking "OK". </P> <P> <IMG src="" /> </P> <P> Grant the selected security principal the "Read" role. </P> <P> <IMG src="" /> </P> <P> Delegation tab depicting Authenticated Users having the Read permissions. </P> <P> <IMG src="" /> </P> <P> <B> <STRONG> Select and deploy GPOs again: </STRONG> </B> <BR /> Note: To modify permissions on multiple AGPM-managed GPOs, use Shift+click or Ctrl+click to select multiple GPOs at a time, then deploy them in a single operation. <BR /> Ctrl+A does not select all policies. </P> <P> <IMG src="" /> </P> <P> <IMG src="" /> </P> <P> The targeted GPOs now have the new permissions when viewed in AD: </P> <P> <IMG src="" /> </P> <P> Below are some frequently asked questions we have seen: </P> <BR /> Frequently Asked Questions (FAQs): <BR /> <BR /> <BR /> <P> <STRONG> Q1 </STRONG> ) Do I need to install the fix only on client operating systems, or do I also need to install it on server operating systems? </P> <BR /> <P> <STRONG> A1 </STRONG> ) It is recommended you patch Windows and Windows Server computers that are running Windows Vista, Windows Server 2008 and newer operating systems (OS), regardless of SKU or role, in your entire domain environment. These updates only change behavior from a client (as in "client-server distributed system architecture") standpoint, but all computers in a domain are "clients" to SYSVOL and Group Policy; even the Domain Controllers (DCs) themselves. </P> <BR /> <P> <STRONG> Q2 </STRONG> ) Do I need to enable any registry settings to enable the security update?
</P> <BR /> <P> <STRONG> A2) </STRONG> No, this security update will be enabled when you install the MS16-072 security update; however, you need to check the permissions on your Group Policy Objects (GPOs) as explained above. </P> <BR /> <P> <STRONG> Q3) </STRONG> What will change in regard to how group policy processing works after the security update is installed? </P> <BR /> <P> <STRONG> A3) </STRONG> To retrieve user policy, the connection to the Windows domain controller (DC) prior to the installation of MS16-072 is made under the user's security context. With this security update installed, instead of the user's security context, Windows group policy clients will now use the local system's security context, thereby forcing Kerberos authentication. </P> <BR /> <P> <STRONG> Q4) </STRONG> We already have the security updates MS15-011 &amp; MS15-014 installed, which harden the UNC paths for SYSVOL &amp; NETLOGON, and have the following registry keys being pushed using group policy: </P> <BR /> <UL> <BR /> <LI> RequirePrivacy=1 </LI> <BR /> <LI> RequireMutualAuthentication=1 </LI> <BR /> <LI> RequireIntegrity=1 </LI> <BR /> </UL> <BR /> <P> Should the UNC hardening security update with the above registry settings not take care of this vulnerability when processing group policy from the SYSVOL? </P> <BR /> <P> <STRONG> A4) </STRONG> No. UNC hardening alone will not protect against this vulnerability. In order to protect against this vulnerability, one of the following scenarios must apply: UNC hardened access is enabled for SYSVOL/NETLOGON as suggested, and the client computer is configured to require Kerberos FAST armoring </P> <BR /> <P> – OR – </P> <BR /> <P> UNC hardened access is enabled for SYSVOL/NETLOGON, and this particular security update ( <A href="#" target="_blank"> MS16-072 \ KB3163622 </A> ) is installed. </P> <BR /> <P> <STRONG> Q5) </STRONG> If we have security filtering on computer objects, what change may be needed after we install the security update?
</P> <BR /> <P> <STRONG> A5) </STRONG> Nothing will change in regard to how computer Group Policy retrieval and processing works. </P> <BR /> <P> <STRONG> Q6) </STRONG> We are using security filtering for user objects, and after installing the update, group policy processing no longer works. </P> <BR /> <P> <STRONG> A6) </STRONG> As noted above, the security update changes the way user group policy settings are retrieved. The reason group policy processing fails after the update is installed is that you may have removed the default "Authenticated Users" group from the Group Policy Object (GPO). The computer account will now need <STRONG> "read" </STRONG> permissions on the Group Policy Object (GPO). You can add the <STRONG> "Domain Computers" </STRONG> group with <STRONG> "Read" </STRONG> permissions on the Group Policy Object (GPO) so the computer is able to retrieve the list of GPOs to download for the user. </P> <BR /> <P> <STRONG> Example Screenshot as below: </STRONG> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Q7) </STRONG> Will installing this security update impact cross-forest user group policy processing? </P> <BR /> <P> <STRONG> A7) </STRONG> <STRONG> No, </STRONG> this security update will not impact cross-forest user group policy processing. When a user from one forest logs on to a computer in another forest and the group policy setting "Allow Cross-Forest User Policy and Roaming User Profiles" is enabled, the user group policy during the cross-forest logon will be retrieved using the user's security context. </P> <BR /> <P> <STRONG> Q8) </STRONG> Is there a need to specifically add "Domain Computers" to make user group policy processing work, or should adding "Authenticated Users" with just Read permissions suffice? </P> <BR /> <P> <STRONG> A8) </STRONG> Adding "Authenticated Users" with Read permissions should suffice.
If you already have "Authenticated Users" added with at least Read permissions on a GPO, there is no further action required. "Domain Computers" are by default part of the "Authenticated Users" group, and user group policy processing will continue to work. You only need to add "Domain Computers" to the GPO with Read permissions if you do not want to grant "Authenticated Users" "Read" permissions. </P> <BR /> <P> Thanks, </P> <BR /> <P> Ajay Sarkaria </P> <BR /> <P> Supportability Program Manager – Windows </P> <P> Edits: <BR /> 6/29/16 – added script link and prereqs <BR /> 7/11/16 – added information about AGPM <BR /> 8/16/16 – added note about folder redirection </P> </BODY></HTML> Fri, 05 Apr 2019 03:20:04 GMT JustinTurner 2019-04-05T03:20:04Z The Version Store Called, and They’re All Out of Buckets <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jun 14, 2016 </STRONG> <BR /> <P> Hello, <A href="#" target="_blank"> Ryan Ries </A> back at it again with another exciting installment of esoteric Active Directory and ESE database details! </P> <P> I think we need to have another little chat about something called the version store. </P> <P> The version store is an inherent mechanism of the Extensible Storage Engine and a commonly seen concept among databases in general. (ESE is sometimes referred to as Jet Blue. Sometimes old codenames are so catchy that they just won’t die.) Therefore, the following information should be relevant to any application or service that uses an ESE database (such as Exchange), but today I’m specifically focused on its usage as it pertains to Active Directory. </P> <P> The version store is one of those details that the majority of customers will never need to think about. The stock configuration of the version store for Active Directory will be sufficient to handle any situation encountered by 99% of AD administrators.
But for that 1% out there with exceptionally large and/or busy Active Directory deployments, (or for those who make “interesting” administrative choices,) the monitoring and tuning of the version store can become a very important topic. And quite suddenly too, as replication throughout your environment grinds to a halt because of version store exhaustion and you scramble to figure out why. </P> <P> The purpose of this blog post is to provide <B> <STRONG> <STRONG> up-to-date </STRONG> </STRONG> </B> (as of the year 2016) information and guidance on the version store, and to do it in a format that may be more palatable to many readers than sifting through reams of old MSDN and TechNet documentation that may or may not be accurate or up to date. I can also offer more practical examples than you would probably get from straight technical documentation. There has been quite an uptick lately in the number of cases we’re seeing here in Support that center around version store exhaustion. While the job security for us is nice, knowing this stuff ahead of time can save you from having to call us and spend lots of costly support hours. </P> Version Store: What is it? <P> As mentioned earlier, the version store is an integral part of the ESE database engine. It’s an area of temporary storage in memory that holds copies of objects that are in the process of being modified, for the sake of providing atomic transactions. This allows the database to roll back transactions in case it can’t commit them, and it allows other threads to read from a copy of the data while it’s in the process of being modified. All applications and services that utilize an ESE database use version store to some extent. The article “ <A href="#" target="_blank"> How the Data Store Works </A> ” describes it well: </P> <P> <I> “ESE provides transactional views of the database. 
The cost of providing these views is that any object that is modified in a transaction has to be temporarily copied so that two views of the object can be provided: one to the thread inside that transaction and one to threads in other transactions. This copy must remain as long as any two transactions in the process have different views of the object. The repository that holds these temporary copies is called the version store. Because the version store requires contiguous virtual address space, it has a size limit. If a transaction is open for a long time while changes are being made (either in that transaction or in others), eventually the version store can be exhausted. At this point, no further database updates are possible.” </I> <I> </I> </P> <P> When Active Directory was first introduced, it was deployed on machines with a single x86 processor with less than 4 GB of RAM supporting NTDS.DIT files that ranged between 2MB and a few hundred MB. Most of the documentation you’ll find on the internet regarding the version store still has its roots in that era and was written with the aforementioned hardware in mind. Today, things like hardware refreshes, OS version upgrades, cloud adoption and an improved understanding of AD architecture are driving massive consolidation in the number of forests, domains and domain controllers in them, DIT sizes are getting bigger… all while still relying on default configuration values from the Windows 2000 era. </P> <P> The number-one killer of version store is long-running transactions. 
Transactions that tend to be long-running include, but are not limited to: </P> <P> - Deleting a group with 100,000 members <BR /> - Deleting any object, not just a group, with 100,000 or more forward/back links to clean <BR /> - Modifying ACLs in Active Directory on a parent container that propagate down to many thousands of inheriting child objects <BR /> - Creating new database indices <BR /> - Having underpowered or overtaxed domain controllers, causing transactions to take longer in general <BR /> - Anything that requires boat-loads of database modification <BR /> - Large <A href="#" target="_blank"> SDProp </A> and garbage collection tasks <BR /> - Any combination thereof </P> <P> I will show some examples of the errors that you would see in your event logs when you experience version store exhaustion in the next section. </P> Monitoring Version Store Usage <P> To monitor version store usage, leverage the Performance Monitor (perfmon) counter: </P> <P> <B> '\\dc01\Database ==&gt; Instances(lsass/NTDSA)\Version buckets allocated' </B> <B> </B> </P> <P> <IMG src="" /> <I> <BR /> (Figure 1: The ‘Version buckets allocated’ perfmon counter.) </I> <I> </I> </P> <P> The version store divides the amount of memory that it has been given into “buckets,” or “pages.” Version store pages need not (and in AD, they do not) equal the size of database pages elsewhere in the database. We’ll get into the exact size of these buckets in a minute. </P> <P> During typical operation, when the database is not busy, this counter will be low. It may even be zero if the database really just isn’t doing anything. But when you perform one of those actions that I mentioned above that qualify as “long-running transactions,” you will trigger a spike in the version store usage. 
Here is an example of me deleting a group that contains 200,000 members, on a DC running 2012 R2 with 1 64bit CPU: </P> <P> <IMG src="" /> <I> (Figure 2: Deleting a group containing 200k members on a 2012 R2 DC with 1 64bit CPU.) </I> <I> </I> </P> <P> The version store spikes to 5332 buckets allocated here, seconds after I deleted the group, but as long as the DC recovers and falls back down to nominal levels, you’ll be alright. If it stays high or even maxed out for extended periods of time, then no more database transactions for you. This includes no more replication. This is just an example using the common member/memberOf relationship, but any linked-value attribute relationship can cause this behavior. (I’ve talked a little about linked value attributes before <A href="#" target="_blank"> here </A> .) There are plenty of other types of objects that may invoke this same kind of behavior, such as deleting an RODC computer object, and then its msDs-RevealedUsers links must be processed, etc.. </P> <P> I’m not saying that deleting a group with fewer than 200K members couldn’t also trigger version store exhaustion if there are other transactions taking place on your domain controller simultaneously or other extenuating circumstances. I’ve seen transactions involving as few as 70K linked values cause major problems. </P> <P> After you delete an object in AD, and the domain controller turns it into a tombstone, each domain controller has to process the linked-value attributes of that object to maintain the referential integrity of the database. It does this in “batches,” usually 1000 or 10,000 depending on Windows version and configuration. This was only <I> very </I> recently documented <A href="#" target="_blank"> here </A> . Since each “batch” of 1000 or 10,000 is considered a single transaction, a smaller batch size will tend to complete faster and thus require less version store usage. (But the overall job will take longer.) 
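To put rough numbers on that batching behavior, here is a toy Python calculation (purely illustrative; the 1,000/10,000 batch sizes and the 200,000-link example come from the text above, and the function name is invented for this sketch):

```python
# Toy illustration of link-cleanup batching: each batch of links is
# processed as a single transaction, so smaller batches mean shorter
# transactions (less version store held at a time) but more of them.
import math

def cleanup_batches(total_links, batch_size):
    """Number of single-transaction batches needed to clean the links."""
    return math.ceil(total_links / batch_size)

links = 200_000  # e.g. the 200k-member group deleted in Figure 2
print(cleanup_batches(links, 1_000))   # 200 short transactions
print(cleanup_batches(links, 10_000))  # 20 longer-running transactions
```

Either way the same 200,000 links get processed; the batch size only trades the duration of each transaction against the number of transactions in the overall job.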
</P> <P> An interesting curveball here is that having the AD Recycle Bin enabled will defer this action by an msDs-DeletedObjectLifetime number of days after an object is deleted, since that’s the appeal behind the AD Recycle Bin – it allows you to easily restore deleted objects with all their links intact. (More detail on the AD Recycle Bin <A href="#" target="_blank"> here </A> .) </P> <P> When you run out of version storage, no other database transactions can be committed until the transaction or transactions that are causing the version store exhaustion are completed or rolled back. At this point, most people start rebooting their domain controllers, and this may or may not resolve the immediate issue for them depending on exactly what’s going on. Another thing that may alleviate this issue is offline defragmentation of the database. (Or reducing the links batch size, or increasing the version store size – more on that later.) Again, we’re usually looking at 100+ gigabyte DITs when we see this kind of issue, so we’re essentially talking about pushing the limits of AD. And we’re also talking about hours of downtime for a domain controller while we do that offline defrag and semantic database analysis. </P> <P> Here, Active Directory is completely tapping out the version store. Notice the plateau once it has reached its max: </P> <P> <IMG src="" /> <I> (Figure 3: Version store being maxed out at 13078 buckets on a 2012 R2 DC with 1 64bit CPU.) </I> <I> </I> </P> <P> So it has maxed out at 13,078 buckets. 
</P> <P> When you hit this wall, you will see events such as these in your event logs: </P> <P> Log Name: Directory Service <BR /> Source: Microsoft-Windows-ActiveDirectory_DomainService <BR /> Date: 5/16/2016 5:54:52 PM <BR /> Event ID: 1519 <BR /> Task Category: Internal Processing <BR /> Level: Error <BR /> Keywords: Classic <BR /> User: S-1-5-21-4276753195-2149800008-4148487879-500 <BR /> Computer: <BR /> Description: <BR /> Internal Error: Active Directory Domain Services could not perform an operation because the database has run out of version storage. </P> <P> And also: </P> <P> Log Name: Directory Service <BR /> Source: NTDS ISAM <BR /> Date: 5/16/2016 5:54:52 PM <BR /> Event ID: 623 <BR /> Task Category: (14) <BR /> Level: Error <BR /> Keywords: Classic <BR /> User: N/A <BR /> Computer: <BR /> Description: <BR /> NTDS (480) NTDSA: The version store for this instance (0) has reached its maximum size of 408Mb. It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back. </P> <P> The peculiar “408Mb” figure that comes along with that last event leads us into the next section… </P> How big is the Version Store by default? <P> The “ <A href="#" target="_blank"> How the Data Store Works </A> ” article that I linked to earlier says: </P> <P> <I> “The version store has a size limit that is the lesser of the following: one-fourth of total random access memory (RAM) or 100 MB. Because most domain controllers have more than 400 MB of RAM, the most common version store size is the maximum size of 100 MB.” </I> <I> </I> </P> <P> Incorrect. </P> <P> And then you have other articles that have even <A href="#" target="_blank"> gone to print </A> , such as this one, that say: </P> <P> <I> “Typically, the version store is 25 percent of the physical RAM.” </I> <I> </I> </P> <P> Extremely incorrect. 
</P> <P> What about my earlier question about the bucket size? Well, if you consulted <A href="#" target="_blank"> this </A> KB article you would read: </P> <P> <I> “ </I> <I> The value for the setting is the number of 16KB memory chunks that will be reserved.” </I> <I> </I> </P> <P> Nope, that’s wrong. </P> <P> Or if I go to the <A href="#" target="_blank"> MSDN documentation </A> for ESE: </P> <P> <I> “JET_paramMaxVerPages </I> <BR /> <I> This parameter reserves the requested number of version store pages for use by an instance. </I> <I> </I> </P> <P> <I> … </I> <I> </I> </P> <P> <I> Each version store page as configured by this parameter is 16KB in size.” </I> <I> </I> </P> <P> Not true. </P> <P> The pages are not 16KB anymore on 64bit DCs. And the only time that the “100MB” figure was ever even close to accurate was when domain controllers were 32bit and had 1 CPU. But today, domain controllers are 64bit and have lots of CPUs. Both the version store bucket size <I> and </I> the number of version store buckets allocated by default double based on whether your domain controller is 32bit or 64bit. And the figure also scales a little bit based on how many CPUs are in your domain controller. </P> <P> So without further ado, here is how to calculate the <I> actual </I> number of buckets that Active Directory will allocate by default: </P> <P> <B> (2 * (3 * (15 + 4 + 4 * #CPUs)) + 6400) * PointerSize / 4 </B> <B> </B> </P> <P> Pointer size is 4 if you’re using a 32bit processor, and 8 if you’re using a 64bit processor. </P> <P> And secondly, version store pages are 16KB if you’re on a 32bit processor, and 32KB if you’re on a 64bit processor. So using a 64bit processor effectively quadruples the default size of your AD version store.
To convert number of buckets allocated into bytes for a 32bit processor: </P> <P> <B> (((2 * (3 * (15 + 4 + 4 * 1)) + 6400) * 4 / 4) * 16KB) / 1MB </B> <B> </B> </P> <P> And for a 64bit processor: </P> <P> <B> (((2 * (3 * (15 + 4 + 4 * 1)) + 6400) * 8 / 4) * 32KB) / 1MB </B> <B> </B> </P> <P> So using the above formulae, the version store size for a single-core, 64bit DC would be ~408MB, which matches that event ID 623 we got from ESE earlier. It also conveniently matches 13078 * 32KB buckets, which is where we plateaued with our perfmon counter earlier. </P> <P> If you had a 4-core, 64bit domain controller, the formula would come out to ~412MB, and you will see this line up with the event log event ID 623 on that machine. When a 4-core, Windows 2008 R2 domain controller with default configuration runs out of version store: </P> <P> Log Name:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Directory Service <BR /> Source:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; NTDS ISAM <BR /> Date:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5/15/2016 1:18:25 PM <BR /> Event ID:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 623 <BR /> Task Category: (14) <BR /> Level:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Error <BR /> Keywords:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Classic <BR /> User:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A <BR /> Computer:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <BR /> Description: <BR /> NTDS (476) NTDSA: The version store for this instance (0) has reached its maximum size of <B> <STRONG> 412Mb </STRONG> </B> . It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back. </P> <P> The version store size for a single-core, 32bit DC is ~102MB. This must be where the original “100MB” adage came from. But as you can see now, that information is woefully outdated. 
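The arithmetic above is easy to sanity-check. Here is a small Python sketch (illustrative only; the helper names are invented for this example, and AD itself does not expose this calculation — it simply evaluates the formula quoted above, including the "EDB max ver pages (increment over the minimum)" override discussed later in the post):

```python
# Illustrative sketch only: evaluates the version store sizing formula
# quoted above. The function names are invented for this example.

def version_store_buckets(cpus, x64=True, increment=6400):
    # (2 * (3 * (15 + 4 + 4 * #CPUs)) + increment) * PointerSize / 4
    pointer_size = 8 if x64 else 4
    return (2 * (3 * (15 + 4 + 4 * cpus)) + increment) * pointer_size // 4

def version_store_mb(cpus, x64=True, increment=6400):
    bucket_kb = 32 if x64 else 16  # 32KB buckets on x64, 16KB on x86
    return version_store_buckets(cpus, x64, increment) * bucket_kb / 1024

print(version_store_mb(1))                  # 408.625 -- the ~408MB from event 623
print(version_store_mb(1, x64=False))       # 102.15625 -- the ~102MB x86 figure
print(version_store_mb(1, increment=9600))  # 608.625 -- the registry-override case
```

For the default single-core x64 case this evaluates to 13,076 buckets, within a couple of buckets of the 13,078 plateau observed in Figure 3; the small differences between the formula, the perfmon counter, and the event-log figures are in line with the rounding the post itself mentions.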
</P> <P> The <B> 6400 </B> number in the equation comes from the fact that 6400 is the absolute, hard-coded minimum number of version store pages/buckets that AD will give you. Turns out that’s about 100MB, if you assumed 16KB pages, or 200MB if you assume 32KB pages. The interesting side-effect from this is that the documented “ <B> <STRONG> EDB max ver pages (increment over the minimum) </STRONG> </B> ” registry entry, which is <A href="#" target="_blank"> the supported way </A> of increasing your version store size, doesn’t actually have any effect unless you set it to some value greater than 6400 decimal. If you set that registry key to something less than 6400, then it will just get overridden to 6400 when AD starts. But if you set that registry entry to, say, 9600 decimal, then your version store size calculation will be: </P> <P> <B> (((2 *(3 * (15 + 4 + 4 * 1)) + 9600) * 8 / 4) * 32KB) / 1MB = 608.6MB </B> <B> </B> </P> <P> For a 64bit, 1-core domain controller. </P> <P> So let’s set those values on a DC, then run up the version store, and let’s get empirical up in here: </P> <P> <IMG src="" /> <I> (Figure 4: Version store exhaustion at 19478 buckets on a 2012 R2 DC with 1 64bit CPU.) </I> <I> </I> </P> <P> <B> </B> </P> <P> <B> <STRONG> (19478 * 32KB) / 1MB = 608.7MB </STRONG> </B> <B> </B> </P> <P> And wouldn’t you know it, the event log now reads: </P> <P> <IMG src="" /> <I> (Figure 5: The event log from the previous version store exhaustion, showing the effect of setting the “EDB max ver pages (increment over the minimum)” registry value to 9600.) 
</I> <I> </I> </P> <P> Here’s a table that shows version store sizes based on the “EDB max ver pages (increment over the minimum)” value and common CPU counts: </P> <TABLE> <TBODY><TR> <TD> <P> <STRONG> Buckets </STRONG> </P> </TD> <TD> <P> <STRONG> 1 CPU </STRONG> </P> </TD> <TD> <P> <STRONG> 2 CPUs </STRONG> </P> </TD> <TD> <P> <STRONG> 4 CPUs </STRONG> </P> </TD> <TD> <P> <STRONG> 8 CPUs </STRONG> </P> </TD> <TD> <P> <B> <STRONG> 16 CPUs </STRONG> </B> </P> </TD> </TR> <TR> <TD> <P> <B> 6400 </B> </P> <P> <B> (The default) </B> </P> </TD> <TD> <P> x64: 410 MB </P> <P> x86: 103 MB </P> </TD> <TD> <P> x64: 412 MB </P> <P> x86: 103 MB </P> </TD> <TD> <P> x64: 415 MB </P> <P> x86: 104 MB </P> </TD> <TD> <P> x64: 421 MB </P> <P> x86: 105 MB </P> </TD> <TD> <P> x64: 433 MB </P> <P> x86: 108 MB </P> </TD> </TR> <TR> <TD> <P> <B> 9600 </B> </P> </TD> <TD> <P> x64: 608 MB </P> <P> x86: 152 MB </P> </TD> <TD> <P> x64: 610 MB </P> <P> x86: 153 MB </P> </TD> <TD> <P> x64: 613 MB </P> <P> x86: 153 MB </P> </TD> <TD> <P> x64: 619 MB </P> <P> x86: 155 MB </P> </TD> <TD> <P> x64: 631 MB </P> <P> x86: 158 MB </P> </TD> </TR> <TR> <TD> <P> <B> 12800 </B> </P> </TD> <TD> <P> x64: 808 MB </P> <P> x86: 202 MB </P> </TD> <TD> <P> x64: 810 MB </P> <P> x86: 203 MB </P> </TD> <TD> <P> x64: 813 MB </P> <P> x86: 203 MB </P> </TD> <TD> <P> x64: 819 MB </P> <P> x86: 205 MB </P> </TD> <TD> <P> x64: 831 MB </P> <P> x86: 208 MB </P> </TD> </TR> <TR> <TD> <P> <B> 16000 </B> </P> </TD> <TD> <P> x64: 1008 MB </P> <P> x86: 252 MB </P> </TD> <TD> <P> x64: 1010 MB </P> <P> x86: 253 MB </P> </TD> <TD> <P> x64: 1013 MB </P> <P> x86: 253 MB </P> </TD> <TD> <P> x64: 1019 MB </P> <P> x86: 255 MB </P> </TD> <TD> <P> x64: 1031 MB </P> <P> x86: 258 MB </P> </TD> </TR> <TR> <TD> <P> <B> 19200 </B> </P> </TD> <TD> <P> x64: 1208 MB </P> <P> x86: 302 MB </P> </TD> <TD> <P> x64: 1210 MB </P> <P> x86: 303 MB </P> </TD>
<TD> <P> x64: 1213 MB </P> <P> x86: 303 MB </P> </TD> <TD> <P> x64: 1219 MB </P> <P> x86: 305 MB </P> </TD> <TD> <P> x64: 1231 MB </P> <P> x86: 308 MB </P> </TD> </TR> </TBODY></TABLE> <P> Sorry for the slight rounding errors – I just didn’t want to deal with decimals. As you can see, the number of CPUs in your domain controller only has a slight effect on the version store size. The processor architecture, however, makes all the difference. Good thing <B> <STRONG> absolutely no one uses x86 DCs anymore, right </STRONG> ? </B> </P> <P> Now I want to add a final word of caution. </P> <P> I want to make it clear that we recommend changing the “EDB max ver pages (increment over the minimum)” value <I> only when necessary </I> , that is, when the event ID 623s start appearing. (If it ain’t broke, don’t fix it.) I also want to reiterate the warnings that appear on the <A href="#" target="_blank"> support KB </A> : you <B> must not </B> set this value arbitrarily high, you <I> should </I> increase this setting in small (50MB or 100MB) increments, and if setting the value to 19200 buckets still does not resolve your issue, then you should contact Microsoft Support. If you are going to change this value, it is advisable to change it consistently across all domain controllers, but you must also carefully consider the processor architecture and available memory on each DC before you change this setting. The version store requires a contiguous allocation of memory – precious real estate – and raising the value too high can prevent lsass from being able to perform other work. Once the problem has subsided, you should then return this setting to its default value. </P> <P> In my next post on this topic, I plan on going into more detail on how one might actually troubleshoot the issue and track down the reason why the version store exhaustion is happening.
</P> Conclusions <P> There is a lot of old documentation out there that has misled many an AD administrator on this topic. It was essentially accurate at the time it was written, but AD has evolved since then. I hope that with this post I was able to shed more light on the topic than you probably ever thought was necessary. It’s an undeniable truth that more and more of our customers continue to push the limits of AD beyond that which was originally conceived. I also want to remind the reader that the majority of the information in this article is AD-specific. If you’re thinking about Exchange or Certificate Services or Windows Update or DFSR or anything else that uses an ESE database, then you need to go figure out your own application-specific details, because we don’t use the same page sizes or algorithms as those guys. </P> <P> I hope this will be valuable to those who find themselves asking questions about the ESE version store in Active Directory. </P> <P> With love, </P> <P> Ryan “Buckets of Fun” Ries </P> </BODY></HTML> Fri, 05 Apr 2019 03:17:28 GMT Ryan Ries 2019-04-05T03:17:28Z Setting up Virtual Smart card logon using Virtual TPM for Windows 10 Hyper-V VM Guests <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on May 11, 2016 </STRONG> <BR /> Hello Everyone, my name is <A href="#" target="_blank"> Raghav </A> and I’m a Technical Advisor for one of the Microsoft Active Directory support teams. This is my first blog and today I’ll share with you how to configure a Hyper-V environment in order to enable virtual smart card logon to VM guests by leveraging a new Windows 10 feature: <B> virtual Trusted Platform Module (TPM). 
</B> <B> <BR /> </B> Here’s a quick overview of the terminology discussed in this post: <UL> <LI> Smart cards are physical authentication devices, which improve on the concept of a password by requiring that users actually have their smart card device with them to access the system, in addition to knowing the PIN, which provides access to the smart card. </LI> <LI> Virtual smart cards (VSCs) emulate the functionality of traditional smart cards, but instead of requiring the purchase of additional hardware, they utilize technology that users already own and are more likely to have with them at all times. Theoretically, any device that can provide the three key properties of smart cards (non-exportability, isolated cryptography, and anti-hammering) can be commissioned as a VSC, though the Microsoft virtual smart card platform is currently limited to the use of the Trusted Platform Module (TPM) chip onboard most modern computers. This blog will mostly concern TPM virtual smart cards. <BR /> For more information, read <A href="#" target="_blank"> Understanding and Evaluating Virtual Smart Cards </A> . </LI> <LI> Trusted Platform Module - (As Christopher Delay explains in his <A href="#" target="_blank"> blog </A> ) TPM is a cryptographic device that is attached at the chip level to a PC, Laptop, Tablet, or Mobile Phone. The TPM securely stores measurements of various states of the computer, OS, and applications. These measurements are used to ensure the integrity of the system and software running on that system. The TPM can also be used to generate and store cryptographic keys. Additionally, cryptographic operations using these keys take place on the TPM preventing the private keys of certificates from being accessed outside the TPM. 
</LI> <LI> <STRONG> Virtualization-based security – The following information is taken directly from </STRONG> <A href="#" target="_blank"> </A> <STRONG> </STRONG> </LI> <UL> <LI> One of the most powerful changes to Windows 10 is virtualization-based security. Virtualization-based security (VBS) takes advantage of advances in PC virtualization to change the game when it comes to protecting system components from compromise. VBS is able to isolate some of the most sensitive security components of Windows 10. These security components aren’t just isolated through application programming interface (API) restrictions or a middle-layer: They actually run in a different virtual environment and are isolated from the Windows 10 operating system itself. </LI> <LI> VBS and the isolation it provides are accomplished through the novel use of the Hyper-V hypervisor. In this case, instead of running other operating systems on top of the hypervisor as virtual guests, the hypervisor supports running the VBS environment in parallel with Windows and enforces a tightly limited set of interactions and access between the environments. Think of the VBS environment as a miniature operating system: It has its own kernel and processes. Unlike Windows, however, the VBS environment runs a micro-kernel and only two processes called trustlets. </LI> </UL> <LI> <STRONG> Local Security Authority (LSA) </STRONG> enforces Windows authentication and authorization policies. LSA is a well-known security component that has been part of Windows since 1993. Sensitive portions of LSA are isolated within the VBS environment and are protected by a new feature called Credential Guard. </LI> <LI> Hypervisor-enforced code integrity verifies the integrity of kernel-mode code prior to execution. This is a part of the <A href="#" target="_blank"> Device Guard </A> feature.
</LI> </UL> <BLOCKQUOTE> VBS provides two major improvements in Windows 10 security: a new trust boundary between key Windows system components and a secure execution environment within which they run. A trust boundary between key Windows system components is enabled through the VBS environment’s use of platform virtualization to isolate the VBS environment from the Windows operating system. Running the VBS environment and Windows operating system as guests on top of Hyper-V and the processor’s virtualization extensions inherently prevents the guests from interacting with each other outside the limited and highly structured communication channels between the trustlets within the VBS environment and the Windows operating system. VBS acts as a secure execution environment because the architecture inherently prevents processes that run within the Windows environment – even those that have full system privileges – from accessing the kernel, trustlets, or any allocated memory within the VBS environment. In addition, the VBS environment uses TPM 2.0 to protect any data that is persisted to disk. Similarly, a user who has access to the physical disk is unable to access the data in an unencrypted form. </BLOCKQUOTE> <IMG src="" /> VBS requires a system that includes: <UL> <LI> Windows 10 Enterprise Edition </LI> <LI> A 64-bit processor </LI> <LI> UEFI with Secure Boot </LI> <LI> Second-Level Address Translation (SLAT) technologies (for example, Intel Extended Page Tables [EPT], AMD Rapid Virtualization Indexing [RVI]) </LI> <LI> Virtualization extensions (for example, Intel VT-x, AMD RVI) </LI> <LI> I/O memory management unit (IOMMU) chipset virtualization (Intel VT-d or AMD-Vi) </LI> <LI> TPM 2.0 </LI> </UL> <B> <I> Note </I> </B> <I> : </I> <I> TPM 1.2 and 2.0 provide protection for encryption keys that are stored in the firmware. TPM 1.2 is not supported on Windows 10 RTM (Build 10240); however, it is supported in Windows 10, Version 1511 (Build 10586) and later.
</I> Among other functions, Windows 10 uses the TPM to protect the encryption keys for BitLocker volumes, virtual smart cards, certificates, and the many other keys that the TPM is used to generate. Windows 10 also uses the TPM to securely record and protect integrity-related measurements of select hardware. <BR /> <BR /> <BR /> Now that we have the terminology clarified, let’s talk about how to set this up. <P> <BR /> </P> Setting up Virtual TPM <BR /> First, we will ensure we meet the basic requirements on the Hyper-V host. On the Hyper-V host, launch <B> <I> msinfo32 </I> </B> and confirm the following values: <P> The <B> BIOS Mode </B> should state “UEFI”. </P> <IMG src="" /> <B> Secure Boot State </B> should be On. <IMG src="" /> <BR /> Next, we will enable VBS on the Hyper-V host. <OL> <LI> Open the Local Group Policy Editor by running <B> gpedit.msc </B> . </LI> <LI> Navigate to the following settings: <B> Computer Configuration, Administrative Templates, System, Device Guard </B> . Double-click <B> Turn On Virtualization Based Security </B> . Set the policy to <B> Enabled </B> and click <B> OK </B> . </LI> </OL> <IMG src="" /> <BR /> Now we will enable <A href="#" target="_blank"> Isolated User Mode </A> on the Hyper-V host. <BR /> 1. To do that, open Run, type <B> appwiz.cpl </B> , and in the left pane find Turn Windows features on or off. <BR /> Check Isolated User Mode, click OK, and then reboot when prompted. <B> </B> <IMG src="" /> <B> </B> <BR /> This completes the initial steps needed for the Hyper-V host. <BR /> <BR /> Now we will enable support for virtual TPM on your Hyper-V VM guest. <BLOCKQUOTE> Note: Support for Virtual TPM is only included in Generation 2 VMs running Windows 10. </BLOCKQUOTE> To enable this on your Windows 10 generation 2 VM, open the VM settings and review the configuration under the Hardware, Security section. <B> Enable Secure Boot </B> and Enable Trusted Platform Module should both be selected.
<IMG src="" /> <BR /> That completes the Virtual TPM part of the configuration. We will now work on the virtual smart card configuration. <BR /> <A href="#" target="_blank"> Setting up Virtual Smart Card </A> In the next section, we create a certificate template so that we can request a certificate that has the required parameters needed for Virtual Smart Card logon. These steps are adapted from the following TechNet article: <A href="#" title="" target="_blank"> </A> <BR /> <B> Prerequisites and Configuration for Certificate Authority (CA) and domain controllers </B> <UL> <LI> Active Directory Domain Services </LI> <LI> Domain controllers must be configured with a domain controller certificate to authenticate smartcard users. The following article covers guidelines for enabling smart card logon: <A href="#" target="_blank"> </A> </LI> <LI> An Enterprise Certification Authority running on Windows Server 2012 or Windows Server 2012 R2. Again, Chris’s <A href="#" target="_blank"> blog </A> neatly covers how to set up a PKI environment. </LI> <LI> Active Directory must have the issuing CA in the NTAuth store to authenticate users to Active Directory. </LI> </UL> <B> Create the certificate template </B> 1. On the CA console (certsrv.msc), right-click Certificate Templates and select Manage <IMG src="" /> <BR /> 2. Right-click the Smartcard Logon template and then click Duplicate Template <IMG src="" /> <BR /> 3. On the Compatibility tab, set the compatibility settings as below <IMG src="" /> <BR /> 4. On the <B> Request Handling tab, in the </B> Purpose section, select Signature and smartcard logon from the drop-down <B> menu </B> <IMG src="" /> <BR /> 5. On the Cryptography tab, select the <B> Requests must use one of the following providers </B> radio button and then select the <B> Microsoft Base Smart Card Crypto Provider option </B> . <IMG src="" /> <BR /> Optionally, you can use a Key Storage Provider (KSP).
To use a KSP, under Provider Category select Key Storage Provider. Then select the Requests must use one of the following providers radio button and select the <B> Microsoft Smart Card Key Storage Provider option </B> . <IMG src="" /> <BR /> 6. On the General tab: Specify a name, such as TPM Virtual Smart Card Logon. Set the validity period to the desired value and choose OK. <P> <BR /> </P> 7. Navigate to <B> Certificate Templates </B> . Right-click Certificate Templates and select New, then Certificate Template to Issue. Select the new template you created in the prior steps. <P> <BR /> </P> <IMG src="" /> Note that it usually takes some time for this certificate to become available for issuance. <P> <BR /> </P> Create the TPM virtual smart card <P> Next we’ll create a virtual smart card on the <B> virtual machine by using the Tpmvscmgr.exe command-line tool. </B> </P> 1. On the Windows 10 Gen 2 Hyper-V VM guest, open an administrative <B> Command Prompt </B> and run the following command: <B> </B> <B> tpmvscmgr.exe create /name myVSC /pin prompt /adminkey random /generate </B> <IMG src="" /> You will be prompted for a PIN. Enter at least eight characters and confirm the entry. (You will need this PIN in later steps.) <P> <BR /> </P> Enroll for the Virtual Smart Card certificate on the virtual machine. 1. In <B> certmgr.msc </B> , right-click Certificates, click All Tasks, then Request New Certificate. <IMG src="" /> <BR /> 2. On the certificate enrollment page, select the new template you created earlier. <IMG src="" /> <BR /> 3. It will prompt for the PIN associated with the Virtual Smart Card. Enter the PIN and click <B> OK </B> . <IMG src="" /> <BR /> 4. If the request completes successfully, it will display the Certificate Installation Results page <IMG src="" /> <BR /> 5.
On the virtual machine, at the sign-in screen, select Sign-in options, choose the security device, and enter the PIN <IMG src="" /> <BR /> That completes the steps on how to deploy Virtual Smart Cards using a virtual TPM on virtual machines.&nbsp; Thanks for reading! <BR /> <P> Raghav Mahajan </P> </BODY></HTML> Fri, 05 Apr 2019 03:16:25 GMT JustinTurner 2019-04-05T03:16:25Z Are your DCs too busy to be monitored?: AD Data Collector Set solutions for long report compile times or report data deletion <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 14, 2016 </STRONG> <BR /> Hi all, <A href="#" target="_blank"> Herbert Mauerer </A> here. In this post we’re back&nbsp;to&nbsp;talk about the built-in AD Diagnostics data collector set available for Active Directory Performance (ADPERF) issues and how to ensure a useful report is generated when your DCs are under heavy load. <BR /> <BR /> Why are my domain controllers so busy, you ask? Consider this: Active Directory stands at the center of Identity Management for many customers. It stores the configuration information for many critical line-of-business applications. It houses certificate templates, is used to distribute group policy and is the account database, among many other things. All sorts of network-based services use Active Directory for authentication and other services. <BR /> <BR /> As mentioned, there are many applications that store their configuration in Active Directory, including the details of the user context relative to the application, plus objects specifically created for the use of these applications. <BR /> <BR /> There are also applications that use Active Directory as a store to synchronize directory data. There are products like Forefront Identity Manager (and now <A href="#" target="_blank"> Microsoft Identity Manager </A> ) where synchronizing data is the only purpose. 
I will not discuss whether these applications are meta-directories or virtual directories, or what class our Office 365 DirSync belongs to… <BR /> <BR /> One way or the other, the volume and complexity of Active Directory queries keeps increasing, and there is no end in sight. <BR /> So what are my Domain Controllers doing all day? <BR /> We get this question a lot from our customers. It often seems as if the AD admins are the last to know what kind of load is put onto the domain controllers by scripts, applications and synchronization engines. And they are not made aware of even significant application changes. <BR /> <BR /> But even small changes can have a drastic effect on DC performance. DCs are resilient, but even the strongest&nbsp;warrior may fall against an overwhelming force.&nbsp; Think along the lines of "death by a thousand cuts".&nbsp; Consider applications or scripts that run non-optimized or excessive queries&nbsp;on many, many&nbsp;clients during or right after logon, and it will feel like a distributed DoS. In this scenario, the domain controller may get bogged down by the enormous&nbsp;workload issued&nbsp;by the clients. This is one of the classic scenarios when it comes to Domain Controller performance problems. <BR /> What resources exist today to help you troubleshoot AD Performance scenarios? <BR /> We have already discussed the overall topic in this <A href="#" target="_blank"> blog </A> , and today many customer requests start with the complaint that the response times are bad and the LSASS CPU time is high. There is also a <A href="#" target="_blank"> blog </A> post specifically on the toolset we've had since Windows Server 2008. We also updated and brought back the <A href="#" target="_blank"> Server Performance Advisor </A> toolset. 
This toolset is now more targeted at trend analysis and base-lining.&nbsp; If a video is more your style, Justin Turner revealed our troubleshooting process at <A href="#" target="_blank"> Ignite </A> . <BR /> <BR /> The reports generated by this data collection are hugely useful for understanding what is burdening the Domain Controllers. There are fewer cases where DCs are responding slowly but no significant utilization is seen. We released a <A href="#" target="_blank"> blog </A> on that scenario and also gave you a simple method to troubleshoot long-running LDAP queries at our sister <A href="#" target="_blank"> site </A> .&nbsp; So what's new with this post? <BR /> The AD Diagnostic Data Collector set report "report.html" is missing or compile time is very slow <BR /> In recent months, we have seen an increasing number of customers with incomplete Data Collector Set reports. Most of the time, the “report.html” file is missing: <BR /> <BR /> This is a folder where the creation of the report.html file was successful: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> This folder has exceeded the limits for reporting: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Notice the report.html file is missing in the second folder example. Also take note that the ETL and BLG files are bigger. What’s the reason for this? <BR /> <BR /> Here is what we uncovered about the Data Collector Set report generation process: <BR /> <UL> <BR /> <LI> When the data collection ends, the process “tracerpt.exe” is launched to create a report for the folder where the data was collected. </LI> <BR /> <LI> “tracerpt.exe” runs with “below normal” priority, so it does not get full CPU attention, especially if LSASS is busy as well. </LI> <BR /> <LI> “tracerpt.exe” runs with one worker thread only, so it cannot take advantage of more than one CPU core. </LI> <BR /> <LI> “tracerpt.exe” accumulates RAM usage as it runs. </LI> <BR /> <LI> “tracerpt.exe” has six hours to complete a report. 
If it is not done within this time, the report is terminated. </LI> <BR /> <LI> With its default settings, the system AD data collector deletes the biggest data set first when the collection&nbsp;exceeds the 1 gigabyte limit. The biggest single file in the reports is typically “Active Directory.etl”.&nbsp; The report.html file will not get created if this file does not exist. </LI> <BR /> </UL> <BR /> I worked with a customer recently with a pretty well-equipped Domain Controller (24 server-class CPUs, 256 GB RAM). The customer was kind enough to run a few tests for various report sizes, and found the following metrics: <BR /> <UL> <BR /> <LI> Until the time-out of six hours is hit, “tracerpt.exe” consumes up to 12 GB of RAM. </LI> <BR /> <LI> During this time, one CPU core was pegged at 100%. If a DC is in a high-load condition, you may want to increase the base priority of “tracerpt.exe” to get the report to complete. This comes at the expense of CPU time, potentially impacting the server's primary role and, in turn, its clients. </LI> <BR /> <LI> The biggest data set that could be completed within the six hours had an “Active Directory.etl” of 3 GB. </LI> <BR /> </UL> <BR /> If you have lower-spec and busier machines, you shouldn't expect&nbsp;the same results as this example (on a lower-spec machine with a 3 GB ETL file, the report.html file would likely fail to compile within the 6-hour window). <BR /> What a bummer, how do you get Performance Logging done then? <BR /> Fortunately, there are a number of parameters for a Data Collector Set that come to the rescue. Before you can use any of them, you first need a custom Data Collector Set. You can play with a variety of settings, based on the purpose of the collection. 
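To make the failure mode concrete, here is a toy model (illustrative only, not the actual Data Manager algorithm) of the resource-policy behaviour described above: when the root path exceeds its size limit, whole report folders are deleted, largest first by default.

```python
def surviving_reports(reports, limit_mb=1024, policy="delete_largest"):
    """Toy model of the Data Manager resource policy: 'reports' is a list
    of (name, size_mb) tuples ordered oldest to newest. Folders are
    deleted until the total fits under the root-path size limit."""
    reports = list(reports)
    while sum(size for _, size in reports) > limit_mb:
        if policy == "delete_largest":
            reports.remove(max(reports, key=lambda r: r[1]))
        else:  # "delete_oldest"
            reports.pop(0)
    return [name for name, _ in reports]

# The default policy throws away the big collection you just gathered...
print(surviving_reports([("old1", 200), ("old2", 300), ("new", 900)]))
# ...while "delete_oldest" keeps it at the expense of older reports.
print(surviving_reports([("old1", 200), ("old2", 300), ("new", 900)],
                        policy="delete_oldest"))
```

The point of the sketch: with "Delete largest", the newest large collection, which usually contains the "Active Directory.etl" you need, is exactly the one that gets purged, which is why switching the policy (and raising the limit) keeps report.html viable.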
<BR /> <BR /> In Performance Monitor you can create a custom set by right-clicking the "User Defined" folder to bring up the <STRONG> New </STRONG> -&gt; <STRONG> Data Collector Set </STRONG> option in the context menu: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> This launches a wizard that prompts you for a number of parameters for the new set. <BR /> <BR /> The first thing it wants is a name for the new set: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> The next step is to select a template. It may be one of the built-in templates or one&nbsp;exported from another computer as an XML file you select through the “Browse” button. In our case, we want to create a clone of “Active Directory Diagnostics”: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> The next step is optional, and it specifies the storage location for the reports. You may want to select a volume with more space or lower IO load than the default volume: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> There is one more page in the wizard, but there is no reason to make any more changes here. You can click “Finish” on this page. <BR /> <BR /> The default settings are fine for an idle DC, but if you find your ETL files are too large, your reports are not generated, or it takes too long to process the data, you will likely want to make the following configuration changes. <BR /> <BR /> For a real "Big Data Collector Set" we first want to make important changes to the storage strategy of the set, available in the “Data Manager” tab: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> The most relevant settings are “Resource Policy” and “Maximum Root Path Size”. I recommend starting with the settings as shown below: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Notice, I've changed the Resource policy from "Delete largest" to "Delete oldest". I've also increased the Maximum root path size from 1024 to 2048 MB.&nbsp; You can run some reports to learn what the best size settings are for you. 
You might very well end up using 10 GB or more for your reports. <BR /> <BR /> The second crucial parameter for your custom sets is the run interval for the data collection. It is five minutes by default. You can adjust that in the properties of the collector on the “Stop Condition” tab. In many cases, shortening the data collection is a viable step if you see continuous high load: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> You should avoid going shorter than two minutes, as this is the maximum LDAP query duration by default. (If you have LDAP queries that reach this threshold, they would not show up in a report that covers less than two minutes.) In fact, I would suggest a minimum interval of three minutes. <BR /> <BR /> One very attractive option is to automatically restart the data collection once it exceeds a certain size. You then need to use common sense when looking at the multiple resulting reports (for example, the ratio of long-running queries shown in each log), but that is definitely better than no report. <BR /> <BR /> If you expect to exceed the 1 GB limit often, you certainly should adjust the total size of collections (Maximum root path size) in the “Data Manager”. <BR /> So how do I know how big the collection is while running it? <BR /> You can take a look at the folder of the data collection in Explorer, but you will notice it is pretty lazy about updating the current size of the collection: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Explorer only updates the folder if you are doing something with the files. 
It sounds strange, but attempting to delete a file will trigger an update: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Now that makes more sense… <BR /> <BR /> If you see the log is growing beyond your expectations, you can manually stop it before the stop condition hits the threshold you have configured: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Of course, you can also start and stop the reporting from a command line using the logman instructions in this <A href="#" target="_blank"> post </A> . <B> </B> <BR /> Room for improvement <BR /> We are aware there is room for improvement to get bigger data sets reported in a shorter time. The good news is that much of these special configuration changes won’t be needed once your DCs are running on&nbsp;Windows Server 2016. We will talk about that in a future post. <BR /> <BR /> Thanks for reading. <BR /> <BR /> Herbert </BODY></HTML> Fri, 05 Apr 2019 03:13:35 GMT JustinTurner 2019-04-05T03:13:35Z Previewing Server 2016 TP4: Temporary Group Memberships <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Mar 09, 2016 </STRONG> <BR /> <P> <I> Disclaimer: Windows Server 2016 is still in a Technical Preview state – the information contained in this post may become inaccurate in the future as the product continues to evolve. </I> <I> More specifically, there are still issues being ironed out in other parts of Privileged Access Management in Technical Preview 4 for multi-forest deployments.&nbsp;&nbsp; Watch for more updates as we get closer to general availability! </I> </P> <P> Hello, <A href="#" target="_blank"> Ryan Ries </A> here again with some juicy new Active Directory hotness. Windows Server 2016 is right around the corner, and it’s bringing a ton of new features and improvements with it. 
Today we’re going to talk about one of the new things you’ll be seeing in Active Directory, which you might see referred to as “expiring links,” or what I like to call “temporary group memberships.” </P> <P> One of the challenges that every security-conscious Active Directory administrator has faced is how to deal with contractors, vendors, temporary employees and anyone else who needs temporary access to resources within your Active Directory environment. Let’s pretend that your Information Security team wants to perform an automated vulnerability scan of all the devices on your network, and to do this, they will need a service account with Domain Administrator privileges for 5 business days. Because you are a wise AD administrator, you don’t like the idea of this service account that will be authenticating against every device on the network having Domain Administrator privileges, but the CTO of the company says that you have to give the InfoSec team what they want. </P> <P> ( <I> Trust me, this stuff really happens. </I> ) </P> <P> So you strike a compromise, claiming that you will grant this service account <I> temporary </I> membership in the Domain Admins group for 5 days while the InfoSec team conducts their vulnerability scan. Now you <I> could </I> just manually remove the service account from the group after 5 days, but you are a busy admin and you know you’re going to forget to do that. You could also set up a scheduled task to run after 5 days that runs a script that removes the service account from the Domain Admins group, but let’s explore a couple of more interesting options. </P> The Old Way <P> One old-school way of accomplishing this is through the use of <A href="#" target="_blank"> dynamic objects </A> in 2003 and later. Dynamic objects are automatically deleted (leaving no tombstone behind) after their entryTTL expires. 
Using this knowledge, our plan is to create a security group called “Temp DA for InfoSec” as a dynamic object with a TTL (time-to-live) of 5 days. Then we’re going to put the service account into the temporary security group. Then we are going to add the temporary security group to the Domain Admins group. The service account is now a member of Domain Admins because of the nested group membership, and once the temporary security group automatically disappears in 5 days, the nested group membership will be broken and the service account will no longer be a member of Domain Admins. </P> <P> Creating dynamic objects is not as simple as just right-clicking in AD Users &amp; Computers and selecting “New &gt; Dynamic Object,” but it’s still pretty easy if you use ldifde.exe and a simple text file. Below is an example: </P> <P> <IMG src="" /> <BR /> Figure 1: Creating a Dynamic Object with ldifde.exe. </P> <P> dn: cn=Temp DA For InfoSec,ou=Information Security,dc=adatum,dc=com <BR /> changeType: add <BR /> objectClass: group <BR /> objectClass: dynamicObject <BR /> entryTTL: 432000 <BR /> sAMAccountName: Temp DA For InfoSec </P> <P> In the text file, just supply the distinguished name of the security group you want to create, and make sure it has both the group objectClass and the dynamicObject objectClass. I set the entryTTL to 432000 in the screen shot above, which is 5 days in seconds. Import the object into AD using the following command: <BR /> <STRONG> ldifde -i -f dynamicGroup.txt </STRONG> </P> <P> Now if you go look at the newly-created group in AD Users &amp; Computers, you’ll see that it has an entryTTL attribute that is steadily counting down to 0: </P> <P> <IMG src="" /> <BR /> Figure 2: Dynamic Security Group with an expiry date. </P> <P> You can create all sorts of objects as Dynamic Objects by the way, not just groups. But enough about that. We came here to see how the situation has improved in Windows Server 2016. 
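Before moving on, a quick sanity check on the entryTTL arithmetic used in the LDIF above (a trivial sketch):

```python
# entryTTL is expressed in seconds; five days works out to the
# 432000 used in the LDIF file.
seconds_per_day = 24 * 60 * 60
entry_ttl = 5 * seconds_per_day
print(entry_ttl)  # 432000
```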
I think you’ll like it better than the somewhat convoluted Dynamic Objects solution I just described. </P> The New Hotness (Windows Server 2016 Technical Preview 4, version 1511.10586.122) <P> For our next trick, we’ll need to enable the Privileged Access Management feature in our Windows Server 2016 forest. This is an AD optional feature, like the AD Recycle Bin. Keep in mind that, just like the AD Recycle Bin, once you enable the Privileged Access Management feature in your forest, you can’t turn it off. This feature also requires a Windows Server 2016 or “Windows Threshold” forest functional level: </P> <P> <IMG src="" /> <BR /> Figure 3: This AD Optional Feature requires a Windows Server 2016 or “Windows Threshold” Forest Functional Level. </P> <P> It’s easy to enable with PowerShell: <BR /> <B> Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target </B> </P> <P> Now that you’ve done this, you can start setting time limits on group memberships directly. It’s so easy: <BR /> <B> Add-ADGroupMember -Identity 'Domain Admins' -Members 'InfoSecSvcAcct' -MemberTimeToLive (New-TimeSpan -Days 5) </B> </P> <P> Now isn’t that a little easier and more straightforward? Our InfoSec service account now has temporary membership in the Domain Admins group for 5 days. And if you want to view the time remaining in a temporary group membership in real time: <BR /> <B> Get-ADGroup 'Domain Admins' -Property member -ShowMemberTimeToLive </B> </P> <P> <IMG src="" /> <BR /> Figure 4: Viewing the time-to-live on a temporary group membership. </P> <P> So that’s cool, but in addition to convenience, there is a real security benefit to this feature that we’ve never had before. I’d be remiss not to mention that with the new Privileged Access Management feature, when you add a temporary group membership like this, the domain controller will actually constrain the Kerberos TGT lifetime to the shortest TTL that the user currently has. 
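The TGT-constraining behaviour just described can be sketched as a toy model (illustrative only, not the actual KDC implementation; the function name is made up):

```python
from datetime import timedelta

def issued_tgt_lifetime(policy_max, shortest_membership_ttl):
    """The DC caps the ticket lifetime at the shortest remaining TTL
    among the account's temporary group memberships."""
    return min(policy_max, shortest_membership_ttl)

# Default domain policy issues 10-hour TGTs; with only 5 minutes left in
# a temporary Domain Admins membership, the ticket is capped at 5 minutes.
print(issued_tgt_lifetime(timedelta(hours=10), timedelta(minutes=5)))
```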
What that means is that if a user account only has 5 minutes left in its Domain Admins membership when it logs on, the domain controller will give that account a TGT that’s only good for 5 more minutes before it has to be renewed, and when it is renewed, the <A href="#" target="_blank"> PAC (privilege attribute certificate) </A> will no longer contain that group membership! You can see this in action using klist.exe: </P> <P> <IMG src="" /> <BR /> Figure 5: My Kerberos ticket is only good for about 8 minutes because of my soon-to-expire group membership. </P> <P> Awesome. </P> <P> Lastly, it’s worth noting that this is just one small aspect of the upcoming Privileged Access Management feature in Windows Server 2016. There’s much more to it, like shadow security principals, bastion forests, new integrations with Microsoft Identity Manager, and more. Read more about what’s new in Windows Server 2016 <A href="#" target="_blank"> here </A> . </P> <P> Until next time, </P> <P> <I> Ryan “Domain Admin for a Minute” Ries </I> </P> <P> <EM> <BR /> </EM> </P> <P> <EM> Updated 3/21/16 with additional text in Disclaimer – “ </EM> <I> Disclaimer: Server 2016 is still in a Technical Preview state – the information contained in this post may become inaccurate in the future as the product continues to evolve.&nbsp; More specifically, there are still issues being ironed out in other parts of Privileged Access Management in Technical Preview 4 for multi-forest deployments.&nbsp;&nbsp; Watch for more updates as we get closer to general availability!” </I> </P> </BODY></HTML> Fri, 05 Apr 2019 03:11:45 GMT Ryan Ries 2019-04-05T03:11:45Z Does your logon hang after a password change on win 8.1 /2012 R2/win10? <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jan 11, 2016 </STRONG> <BR /> Hi, Linda Taylor here, Senior Escalation Engineer from the Directory Services team in the UK. 
<BR /> <BR /> I have been working on this issue which seems to be affecting many of you globally on Windows 8.1, Windows Server 2012 R2 and Windows 10, so I thought it would be a good idea to explain the issue and workarounds while we continue to work on a proper fix. <BR /> <BR /> The symptoms are such that after a password change, logon hangs forever on the welcome screen: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> How annoying…. <BR /> <BR /> The underlying issue is a deadlock between several components including DPAPI and the redirector. <BR /> <BR /> For full details of the issue, workarounds and related fixes, check out my post on the ASKPFEPLAT blog here <A href="#" title="" target="_blank"> </A> <BR /> <BR /> <STRONG> This is now fixed in the following updates: </STRONG> <BR /> <BR /> For Windows 8.1, Windows Server 2012 R2 and 2012, install: <BR /> <UL> <BR /> <LI> KB3132080&nbsp;Logon freezes after you reset your password in Windows 8.1, or Stop error 0x1000007e in Windows Server 2012 R2 <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> For Windows&nbsp;10 TH2 build 1511, install: <BR /> <UL> <BR /> <LI> KB3135173&nbsp;Cumulative update for Windows 10 Version 1511: February 9, 2016 <BR /> <A href="#" target="_blank"> </A> </LI> <BR /> </UL> <BR /> I hope this helps, <BR /> <BR /> Linda </BODY></HTML> Fri, 05 Apr 2019 03:10:53 GMT Lindakup 2019-04-05T03:10:53Z Speaking in Ciphers and other Enigmatic tongues…update! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Dec 08, 2015 </STRONG> <BR /> <P> Hi! <A href="#" target="_blank"> Jim Tierney </A> here again to talk to you about Cryptographic Algorithms, SCHANNEL and other bits of wonderment. My original <A href="#" target="_blank"> post </A> on the topic has gone through yet another rewrite to bring you up to date on recent changes in this&nbsp; crypto space. 
</P> <BR /> <P> So, your company purchases this new super awesome vulnerability and compliance management software suite, and they just ran a scan on your Windows Server 2008 domain controllers and lo! The software reports back that you have weak ciphers enabled, highlighted in <B> RED </B> , flashing, with that "you have failed" font, and including a link to the following Microsoft documentation – <BR /> KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll: <BR /> <A href="#" target="_blank"> </A> </P> <BR /> <P> The report may look similar to this: </P> <BR /> <BLOCKQUOTE> <BR /> <P> SSL Server Has SSLv2 Enabled Vulnerability port 3269/tcp over SSL </P> <BR /> <P> THREAT: <BR /> The Secure Socket Layer (SSL) protocol allows for secure communication between a client and a server. </P> <BR /> <P> There are known flaws in the SSLv2 protocol. A man-in-the-middle attacker can force the communication to a less secure level and then attempt to break the weak encryption. The attacker can also truncate encrypted messages. </P> <BR /> <P> SOLUTION: </P> <BR /> <P> Disable SSLv2. </P> <BR /> </BLOCKQUOTE> <BR /> <P> Upon hearing this information, you fire up your browser and read the aforementioned KB 245030 top to bottom and RDP into your DC’s and begin checking the locations specified by the article. Much to your dismay you notice the locations specified in the article are not correct concerning your Windows 2008 R2 DC’s. On your 2008 R2 DC’s you see the following at this registry location </P> <BR /> <P> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> "Darn you Microsoft documentation!!!!!!" you scream aloud as you shake your fist in the general direction of Redmond, WA…. 
</P> <BR /> <P> This is how it looks on a Windows 2003 Server: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Easy now… </P> <BR /> <P> The registry keys and their content in Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 2012 and 2012 R2 look different from Windows Server 2003 and prior. </P> <BR /> <P> Here is the registry location on Windows 7 – 2012 R2 and its default contents: </P> <BR /> <P> Windows Registry Editor Version 5.00 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel] <BR /> "EventLogging"=dword:00000001 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\CipherSuites] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Hashes] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client] <BR /> "DisabledByDefault"=dword:00000001 </P> <BR /> <P> Allow me to explain the above content that is displayed in standard REGEDIT export format: </P> <BR /> <UL> <BR /> <LI> The <B> Ciphers </B> key should contain no values or subkeys <BR /> </LI> <LI> The <B> CipherSuites </B> key should contain no values or subkeys <BR /> </LI> <LI> The <B> Hashes </B> key should contain no values or subkeys <BR /> </LI> <LI> The <B> KeyExchangeAlgorithms </B> key should contain no values or subkeys </LI> <BR /> </UL> <BR /> <P> The <B> <STRONG> Protocols </STRONG> </B> key should contain the following subkeys and value: <BR /> Protocols <BR /> SSL 2.0 <BR /> Client <BR /> DisabledByDefault REG_DWORD 
0x00000001 (value) </P> <BR /> <BR /> <P> The following table lists the Windows SCHANNEL protocols and whether or not they are enabled or disabled by default in each operating system listed: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <B> *UPDATE – TLS 1.1 and TLS 1.2 support have been added to Windows 2008 Standard (not R2). See the following for information on release dates regarding this important update in functionality - </B> <A href="#" target="_blank"> </A> </P> <BR /> <P> Here is the link to the MSDN article that displays the information above in a less colorful way - <A href="#" target="_blank"> </A> </P> <BR /> <P> *Remember to install the following update if you plan on or are currently using SHA512 certificates: <BR /> SHA512 is disabled in Windows when you use TLS 1.2 <BR /> <A href="#" target="_blank"> </A> </P> <BR /> <P> Similar to Windows Server 2003, these protocols can be disabled for the server or client architecture, meaning that either the protocol can be omitted from the list of supported protocols included in the Client Hello when initiating an SSL connection, or it can be disabled on the server so that even if a client requests SSL 2.0 in a client hello, the server will not respond with that protocol. </P> <BR /> <P> The Client and Server subkeys designate each side of the conversation for each protocol. You can disable a protocol for either the client or the server, but disabling Ciphers, Hashes, or CipherSuites affects BOTH client and server sides. To disable a protocol for one side only, you would have to create the necessary subkeys beneath the Protocols key. 
</P> <BR /> <P> For example: </P> <BR /> <P> Windows Registry Editor Version 5.00 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client] </P> <P> "DisabledByDefault"=dword:00000001 </P> <BR /> <P> </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Client] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Server] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client] <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server] </P> <BR /> <P> This is how it looks in the registry after they have been created: </P> 
<BR /> <P> <IMG src="" /> </P> <BR /> <P> Client SSL 2.0 is disabled by default on Windows Server 2008, 2008 R2, 2012 and 2012 R2. This means the computer will not use SSL 2.0 to initiate a Client Hello. </P> <BR /> <P> So it looks like this in the registry: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client] <BR /> "DisabledByDefault"=dword:00000001 </P> <BR /> <P> Just like Ciphers and KeyExchangeAlgorithms, Protocols can be enabled or disabled. </P> <BR /> <P> To disable other protocols, select the side of the conversation on which you want to disable the protocol, and add the "Enabled"=dword:00000000 value. The example below disables SSL 2.0 for the server in addition to SSL 2.0 for the client. </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client] <BR /> "DisabledByDefault"=dword:00000001 <B> &lt;Default client disabled as I said earlier&gt; </B> </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server] <BR /> "Enabled"=dword:00000000 <B> &lt;disables SSL 2.0 server side&gt; </B> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> After this, you will need to reboot the server. You probably do not want to disable TLS settings. I just added them here for a visual reference. 
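Because the Client/Server subkey pattern is so regular, it lends itself to scripting. Here is an illustrative sketch (my own helper, not an official tool) that emits a .reg fragment for one protocol and one side of the conversation:

```python
BASE = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control"
        r"\SecurityProviders\Schannel\Protocols")

def reg_fragment(protocol, side, enabled):
    """Build a .reg snippet enabling or disabling one protocol for one
    side ('Client' or 'Server') of the conversation."""
    return (f"[{BASE}\\{protocol}\\{side}]\n"
            f'"Enabled"=dword:{int(enabled):08d}\n')

# Disable SSL 2.0 on the server side, as in the example above:
print(reg_fragment("SSL 2.0", "Server", enabled=False))
```

You could paste the emitted fragment under a "Windows Registry Editor Version 5.00" header and import it with regedit, then reboot, exactly as described for the hand-written examples.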
</P> <BR /> <P> <B> ***For Windows Server 2008 R2, if you want to enable server-side TLS 1.1 and 1.2, you MUST create the registry entries as follows: </B> </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server] <BR /> "DisabledByDefault"=dword:00000000 <BR /> "Enabled"=dword:00000001 </P> <BR /> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server] <BR /> "DisabledByDefault"=dword:00000000 <BR /> "Enabled"=dword:00000001 </P> <BR /> <P> So why would you go through all this trouble to disable protocols and such, anyway? Well, there may be a regulatory requirement that your company's web servers should only support <A href="#" target="_blank"> Federal Information Processing Standards (FIPS) 140-1/2 certified </A> cryptographic algorithms and protocols. Currently, TLS is the only protocol that satisfies such a requirement. Luckily, enforcing this compliant behavior does not require you to manually modify registry settings as described above. You can enforce FIPS compliance via group policy as explained by the following: </P> <BR /> <P> <B> The effects of enabling the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" security setting in Windows XP and in later versions of Windows </B> - <A href="#" target="_blank"> </A> </P> <BR /> <P> The 811833 article talks specifically about the group policy setting below, which by default is NOT defined – <BR /> Computer Configuration\ Windows Settings \Security Settings \Local Policies\ Security Options </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The policy above, when applied, will modify the following registry locations and their value content. 
</P> <BR /> <P> Be advised that this FipsAlgorithmPolicy information is stored differently depending on the operating system – </P> <BR /> <BLOCKQUOTE> <BR /> </BLOCKQUOTE> <BR /> <P> <STRONG> Windows 7/2008 </STRONG> <BR /> Windows Registry Editor Version 5.00 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy] <BR /> "Enabled"=dword:00000000 &lt;Default is disabled&gt; </P> <BR /> <BR /> <P> <BR /> </P> <BR /> <P> <STRONG> Windows 2003/XP </STRONG> <BR /> Windows Registry Editor Version 5.00 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa] <BR /> Fipsalgorithmpolicy =dword:00000000 &lt;Default is disabled&gt; </P> <BR /> <BLOCKQUOTE> <BR /> </BLOCKQUOTE> <BR /> <P> Enabling this group policy setting effectively disables everything except TLS. <B> <STRONG> **ATTENTION** </STRONG> </B> If you are applying the FIPS compliant algorithm group policy, the application of this policy will overrule whatever you have manually defined in the SCHANNEL\Protocols key. For example, if you disable TLS 1.0 here - HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0 - the application of the FIPS group policy will overrule this and TLS 1.0 will again be available. </P> <P> To remediate this default FIPS group policy behavior, <B> the <STRONG> following NEW operating-system-specific updates MUST BE INSTALLED </STRONG> </B> . Once these updates are installed, the disabled SCHANNEL protocol settings (whether already configured or configured later) will be honored. </P> <P> <IMG src="" /> <BR /> </P> <BR /> <P> <STRONG> More Examples </STRONG> </P> <BR /> <P> Let’s continue with more examples. A vulnerability report may also indicate the presence of other ciphers it deems to be “weak”. <BR /> Below I have built a .reg file that, when imported, will disable the following ciphers: </P> <BR /> <P> 56-bit DES </P> <P> 40-bit RC4 </P> <P> The NULL cipher </P> <BR /> <P> <STRONG> Behold!
</STRONG> </P> <BR /> <P> Windows Registry Editor Version 5.00 </P> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56] <BR /> "Enabled"=dword:00000000 </P> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL] <BR /> "Enabled"=dword:00000000 </P> <P> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128] <BR /> "Enabled"=dword:00000000 </P> <P> After importing these registry settings, you must reboot the server. </P> <BR /> <P> The vulnerability report might also mention that 40-bit DES is enabled, but that would be a false positive because Windows Server 2008 doesn't support 40-bit DES at all.
For example, you might see this in a vulnerability report: </P> <BR /> <BLOCKQUOTE> <BR /> <P> Here is the list of weak SSL ciphers supported by the remote server: </P> <BR /> <P> Low Strength Ciphers (&lt; 56-bit key) </P> <BR /> <P> SSLv3 </P> <BR /> <P> EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export </P> <BR /> <P> TLSv1 <BR /> <BR /> EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export </P> <BR /> </BLOCKQUOTE> <BR /> <P> If this is reported and it is necessary to get rid of these entries, you can also disable the Diffie-Hellman key exchange algorithm (another component of the two cipher suites described above, designated with Kx=DH(512)). </P> <BR /> <P> To do this, make the following registry changes: </P> <BR /> <P> <BR /> Windows Registry Editor Version 5.00 <BR /> [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman] <BR /> "Enabled"=dword:00000000 </P> <P> You have to create the sub-key Diffie-Hellman yourself. Make this change and reboot the server. This step is NOT advised or required; I am offering it only as an option to satisfy the vulnerability scanning tool. </P> <BR /> <P> Keep in mind, also, that this will disable any cipher suite that relies upon Diffie-Hellman for key exchange. </P> <BR /> <P> You will probably not want to disable ANY cipher suites that rely on Diffie-Hellman. Secure communications such as IPSec and SSL both use Diffie-Hellman for key exchange. If you are running OpenVPN on a Linux/Unix server, you are probably using Diffie-Hellman for key exchange. The point I am trying to make here is that you should not have to disable the Diffie-Hellman key exchange algorithm to satisfy a vulnerability scan. </P> <BR /> <P> <STRONG> Advanced Ciphers have arrived!!!
</STRONG> <BR /> Advanced ciphers were added to Windows 8.1 / Windows Server 2012 R2 computers by KB 2929781, released in April 2014, and again by monthly rollup KB 2919355, released in May 2014. </P> <BR /> <P> Updated cipher suites were released as part of two fixes: <BR /> KB 2919355 for Windows 8.1 and Windows Server 2012 R2 computers <BR /> MS14-066 for Windows 7 and Windows 8 clients and Windows Server 2008 R2 and Windows Server 2012 servers. </P> <BR /> <P> While these updates shipped new ciphers, the cipher suite priority ordering could not be updated correctly. <BR /> KB 3042058, released in March 2015, is a follow-up package that corrects that issue. It is NOT applicable to Windows Server 2008 (non-R2). </P> <BR /> <P> You can set a preference list for which cipher suites the server will negotiate first with a client that supports them. </P> <BR /> <P> You can review this MSDN article on how to set the cipher suite prioritization list via GPO: <A href="#" target="_blank"> </A> </P> <BR /> <P> Default location and ordering of cipher suites: <BR /> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\0010002 </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Location of the cipher suite ordering that is modified by setting this group policy – <BR /> Computer Configuration\Administrative Templates\Network\SSL Configuration Settings\SSL Cipher Suite Order </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> When the SSL Cipher Suite Order group policy is modified and applied successfully, it modifies the following location in the registry: <BR /> HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\0010002 </P> <BR /> <P> The Group Policy dictates the effective cipher suites. Once this policy is applied, the settings here take precedence over what is in the default location. The GPO should override anything else configured on the computer.
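Because the policy value is a single comma-separated string of suite names, a small helper can assemble and sanity-check it before you paste it into the GPO. A hedged sketch (the 1,023-character cap on the policy value matches Microsoft's documentation for this setting, but verify it for your OS build; the suite names are just examples):

```python
# Sketch: assemble the "SSL Cipher Suite Order" policy value.
# The value is one comma-separated string; 1023-char cap is an assumption
# to verify against the documentation for your OS version.
def build_suite_order(suites, max_len=1023):
    value = ",".join(s.strip() for s in suites)
    if len(value) > max_len:
        raise ValueError("cipher suite list is %d chars; policy cap is %d" % (len(value), max_len))
    return value

order = build_suite_order([
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
])
print(order)
```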
The Microsoft SCHANNEL team does not support directly manipulating the Group Policy and default <B> cipher suite </B> locations in the registry. </P> <BR /> <P> Group Policy settings are domain settings configured by a domain administrator and should always have precedence over local settings configured by local administrators. </P> <BR /> <P> Below are two cipher suites that were introduced through the June 2016 rollup - <A href="#" target="_blank"> </A> <BR /> <BR /> These were added to help with interoperability for older applications, since RC4 is soon to be deprecated. <BR /> <STRONG> TLS_DHE_RSA_WITH_AES_128_CBC_SHA <BR /> TLS_DHE_RSA_WITH_AES_256_CBC_SHA </STRONG> <BR /> <BR /> Since these additional cipher suites are now available on clients initiating an SSL connection, any server that has a weak DHE key length under 1024 bits will be rejected by Windows clients. <BR /> <BR /> Below is an explanation of this behavior from the KB that updated Windows 7 clients (Windows 10 has always acted in this manner). <A href="#" target="_blank"> </A> </P> <BLOCKQUOTE> <P> <BR /> “This security update resolves a vulnerability in Windows. The vulnerability could allow information disclosure when Secure Channel (Schannel) allows the use of a weak Diffie-Hellman ephemeral (DHE) key length of 512 bits in an encrypted Transport Layer Security (TLS) session. Allowing 512-bit DHE keys makes DHE key exchanges weak and vulnerable to various attacks. For an attack to be successful, a server has to support 512-bit DHE key lengths. Windows TLS servers send a default DHE key length of 1,024 bits.” </P> </BLOCKQUOTE> <BR /> <P> Being secure is a good thing, and depending on your environment, it may be necessary to restrict certain cryptographic algorithms from use. Just make sure you do your due diligence in testing these settings. It is also well worth your time to really understand how the security vulnerability software your company just purchased does its testing.
A double-sided network trace will reveal both sides of the client-server hello and what cryptographic algorithms are being offered from each side over the wire. </P> <BR /> <P> Jim “Insert cryptic witticism here” Tierney </P> <BR /> <P> Updates: </P> <BR /> <P> 8/29/16: Added information about June 2016 rollup </P> <BR /> <P> 6/7/17: Updated schannel graphic to include Server 2016 defaults.&nbsp; Also updated information about FIPS policy overwriting manually configured values. </P> <BR /> <P> 7/24/17: Updated information about Server 2008 support. </P> <P> 10/25/18: Added list of updates that need to be installed if using FIPS policy and you need to have manually configured SCHANNEL\Protocols registry entries honored </P> </BODY></HTML> Fri, 05 Apr 2019 03:10:37 GMT JustinTurner 2019-04-05T03:10:37Z Using Repadmin with ADLDS and Lingering objects <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Nov 06, 2015 </STRONG> <BR /> <BR /> <BR /> Hi! <B> Linda Taylor </B> here from the UK Directory Services escalation team. This time on ADLDS, Repadmin, lingering objects and even PowerShell…. <BR /> <BR /> The other day a colleague was trying to remove a lingering object in ADLDS. He asked me which repadmin syntax would work for ADLDS, and it occurred to us both that all the documented examples we found for repadmin were only for AD DS. <BR /> <BR /> <STRONG> So, here are some ADLDS-specific examples of repadmin use. </STRONG> <BR /> <BR /> For the purposes of this post I will be using two servers with ADLDS. Both servers belong to Domain and they replicate a partition called <B> DC=Fabrikam. </B> <BR /> <TABLE> <TBODY><TR> <TD> LDS1 runs ADLDS on port 50002. </TD> </TR> <TR> <TD> RootDC1 runs ADLDS on port 51995. </TD> </TR> </TBODY></TABLE> <BR /> 1. Who is replicating my partition? <BR /> If you have many servers in your replica set, you may want to find out which ADLDS servers are replicating a specific partition. ….Yes!
The AD PowerShell module works against ADLDS. <BR /> <BR /> You just need to add the :port on the end of the server name. <BR /> <BR /> One way to list which servers are replicating a specific application partition is to query the attribute <B> msDs-MasteredBy </B> on the respective partition. This attribute contains a list of the NTDS server settings objects for the servers which replicate this partition. <BR /> <BR /> You can do this with ADSIEDIT or ldp.exe or PowerShell or any other means. <BR /> <BR /> <B> PowerShell Example </B> : Use the <B> Get-ADObject </B> cmdlet; I will target my command at localhost:51995.&nbsp; (I am running this on RootDC1.) <BR /> <BR /> <IMG src="" /> <BR /> <BR /> Notice there are two NTDS Settings objects returned, and the server name is recorded as <B> ServerName$ADLDSInstanceName. </B> <BR /> <BR /> So this tells me that according to localhost:51995, the <B> DC=Fabrikam </B> partition is replicated between server LDS1$instance1 and server ROOTDC1$instance1. <BR /> <H3> 2. REPADMIN for ADLDS </H3> <BR /> <B> Generic rules and tips </B> : <BR /> <UL> <BR /> <LI> For most commands the golden rule is to simply use the port inside the DSA_NAME or DSA_LIST parameters, like lds1:50002. That’s it! </LI> <BR /> </UL> <BR /> For example: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> <BR /> <UL> <BR /> <LI> There are some things which do not apply to ADLDS: anything involving FSMOs that ADLDS does not have, like PDC and RID, or the Global Catalog (again, no such thing in ADLDS). </LI> <BR /> <LI> A very useful switch for ADLDS is the <B> /homeserver </B> switch: </LI> <BR /> </UL> <BR /> By default, repadmin assumes you are working with AD and will use the locator, or attempt to connect to the local server on port 389 if this fails. However, for ADLDS the <B> /Homeserver </B> switch allows you to specify an ADLDS server:port.
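Looking back at the msDs-MasteredBy query in section 1, the NTDS Settings distinguished names it returns can be reduced to the ServerName$InstanceName values with a few lines of script. A rough sketch (the sample DNs below are hypothetical, shaped after the ServerName$ADLDSInstanceName convention noted above; the splitting is naive and ignores escaped commas):

```python
# Sketch: extract ServerName$InstanceName from NTDS Settings DNs such as
# those stored in msDs-MasteredBy. Naive split; real DNs containing
# escaped commas would need a proper DN parser.
def masters_from_mastered_by(dns):
    names = []
    for dn in dns:
        rdns = [p.split("=", 1)[1] for p in dn.split(",") if p.startswith("CN=")]
        # rdns[0] is "NTDS Settings"; rdns[1] is the server object name
        names.append(rdns[1])
    return names

# Hypothetical sample values, shaped after the output described above.
dns = [
    "CN=NTDS Settings,CN=LDS1$instance1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,CN={guid}",
    "CN=NTDS Settings,CN=ROOTDC1$instance1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,CN={guid}",
]
print(masters_from_mastered_by(dns))
```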
<BR /> <BR /> For example, If you want to get replication status for all ADLDS servers in a configuration set (like for AD you would run repadmin /showrepl * /csv), for ADLDS you can run the following: <BR /> <BR /> <B> Repadmin /showrepl /homeserver:localhost:50002 * /csv &gt;out.csv </B> <BR /> <BR /> Then you can open the OUT.CSV using something like Excel or even notepad and view a nice summary of the replication status for all servers. You can then sort this and chop it around to your liking. <BR /> <BR /> The below explanation of HOMESERVER is taken from <B> repadmin /listhelp </B> output: <BR /> <TABLE> <TBODY><TR> <TD> If the DSA_LIST argument is a resolvable server name (such as a DNS or WINS name) this will be used as the homeserver. If a non-resolvable parameter is used for the DSA_LIST, repadmin will use the locator to find a server to be used as the homeserver. If the locator does not find a server, repadmin will try the local box (port 389). <BR /> <BR /> The /homeserver:[dns name] option is available to explicitly control home server selection. <BR /> <BR /> This is especially useful when there are more than one forest or configuration set possible. For <BR /> <BR /> example, the DSA_LIST command "fsmo_istg:site1" would target the locally joined domain's directory, so to target an AD/LDS instance, <B> /homeserver:adldsinstance:50000 </B> could be used to resolve the fsmo_istg to site1 defined in the ADAM configuration set on adldsinstance:50000 instead of the fsmo_istg to site1 defined in the locally joined domain. </TD> </TR> </TBODY></TABLE> <BR /> Finally, a particular gotcha that can send you in the wrong troubleshooting direction is a LDAP 0x51 <B> “server down” </B> error which is returned if you forget to add the DSA_NAME and/or port to your repadmin command. Like this: <BR /> <BR /> <IMG src="" /> <BR /> <H3> 3. 
Lingering objects in ADLDS </H3> <BR /> Just like in AD, you can get lingering objects in AD LDS. The only difference is that there is no Global Catalog in ADLDS, and thus no lingering objects are possible in a read-only partition. <BR /> <BR /> <STRONG> EVENT ID 1988 or 2042: </STRONG> <BR /> <BR /> If you bring an outdated instance (past TSL) back online in ADLDS, you may see event 1988 as per <A href="#" target="_blank"> </A> “Outdated Active Directory objects generate event ID 1988”. <BR /> <BR /> On WS 2012 R2 you will see event 2042 telling you that it has been over TombStoneLifetime since you last replicated, so replication is disabled. <BR /> <BR /> <B> What to do next? </B> <BR /> <BR /> First, you want to check for lingering objects and remove them if necessary. <BR /> <BR /> 1. To check for lingering objects you can use <STRONG> repadmin /removelingeringobjects </STRONG> with the <STRONG> /advisory_mode </STRONG> switch. <BR /> <BR /> My colleague Ian Farr, or “Posh Chap” as we call him, recently worked with a customer on such a case and put together a great blog post with a PowerShell one-liner for detecting and removing lingering objects from ADLDS. Check it out here: <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> Example event 1946: <BR /> <BR /> <IMG src="" /> <BR /> <BR /> 2.&nbsp; Once you have detected lingering objects and decided that you need to remove them, you can do so using the same repadmin command as in Ian’s blog, but without the /advisory_mode switch.
<BR /> <BR /> <B> Example command to remove </B> <B> lingering objects: </B> <BR /> <TABLE> <TBODY><TR> <TD> Repadmin /removelingeringobjects <B> lds1:50002 </B> <B> 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 </B> <B> DC=Fabrikam </B> </TD> </TR> </TBODY></TABLE> <BR /> Where <STRONG> Lds1:50002 </STRONG> is the LDS instance and port from which to remove lingering objects <BR /> <BR /> <B> 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 </B> is the DSA GUID of a good LDS server/instance <BR /> <BR /> <B> DC=Fabrikam </B> is the partition from which to remove lingering objects <BR /> <BR /> For each lingering object removed you will see event 1945. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> You can use Ian’s one-liner again to get a list of all the objects which were removed. <BR /> <BR /> As a good practice, you should also do the lingering object checks for the Configuration partition. <BR /> <BR /> Once all lingering objects are removed, replication can be re-enabled and you can go down the pub… (maybe). <BR /> <BR /> I hope this is useful. <BR /> <BR /> Linda. </BODY></HTML> Fri, 05 Apr 2019 03:09:03 GMT Lindakup 2019-04-05T03:09:03Z “Administrative limit for this request was exceeded" Error from Active Directory <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 29, 2015 </STRONG> <BR /> <P> Hello, Ryan Ries here with my first AskDS post! I recently ran into an issue with a particular environment where Active Directory and UNIX systems were being integrated.&nbsp; Microsoft has several attributes in AD to facilitate this, and one of those attributes is the <B> memberUid </B> attribute on security group objects.&nbsp; You add user IDs to the <B> memberUid </B> attribute of the security group, and Active Directory will treat that as group membership from UNIX systems for the purposes of authentication/authorization. </P> <P> <IMG src="" /> </P> <P> All was well and good for a long time.
The group grew and grew to over a thousand users, until one day we wanted to add another UNIX user, and we were greeted with this error: </P> <P> <IMG src="" /> </P> <P> <I> “The administrative limit for this request was exceeded.” </I> </P> <P> Wait, there’s a limit on this attribute? I wonder what that limit is. </P> <P> <A href="#" target="_blank"> MSDN documentation </A> states that the <B> rangeUpper </B> property of the <B> memberUid </B> attribute is 256,000. <A href="#" target="_blank"> This support KB </A> also mentions that: </P> <P> <I> “The attribute size limit for the memberUID attribute in the schema is 256,000 characters. It depends on the individual value length on how many user identifiers (UIDs) will fit into the attribute.” </I> </P> <P> And you can even see it for yourself if you fancy a gander at your schema: </P> <P> <IMG src="" /> </P> <P> Something doesn’t add up here – we’ve only added around 1200 users to the <B> memberUid </B> attribute of this security group. Sure it’s a big group, but that doesn’t exceed 256,000 characters; not even close. Adding up all the names that I’ve added to the attribute, I figure it adds up to somewhere around 10,000 characters. Not 256,000. </P> <P> So what gives? </P> <P> <I> (If you’ve been following along and you’ve already figured out the problem yourself, then please contact us! </I> <A href="#" target="_blank"> <I> We’re hiring! </I> </A> <I> ) </I> <I> </I> </P> <P> The problem here is that we’re hitting a <I> different </I> limit as we continue to add members to the <B> memberUid </B> attribute, way before we get to 256k characters. </P> <P> The <B> memberUid </B> attribute is a multivalued attribute, however it is not a <A href="#" target="_blank"> <I> linked </I> </A> attribute.&nbsp; This means that it has a limitation on its maximum size that is less than the 256,000 characters shown on the <B> memberUid </B> <B> attributeSchema </B> object. 
</P> <P> You can distinguish between which attributes are linked or not based on whether those <B> attributeSchema </B> objects have values in their <B> linkID </B> attribute. </P> <P> Example of a multivalued and linked attribute: </P> <P> <IMG src="" /> </P> <P> Example of a multivalued but not linked attribute: </P> <P> <IMG src="" /> </P> <P> So if the limit is not really 256,000 characters, then what is it? </P> <P> From <A href="#" target="_blank"> <I> How the Data Store Works </I> </A> on TechNet: </P> <BLOCKQUOTE> <P> <I> “The maximum size of a database record is 8110 bytes, based on an 8-kilobyte (KB) page size. Because of variable overhead requirements and the variable number of attributes that an object might have, it is impossible to provide a precise limit for the maximum number of multivalues that an object can store in its attributes. … </I> </P> <P> <I> The only value that can actually be computed is the maximum number of values in a nonlinked, multivalued attribute when the object has only one attribute (which is impossible). In Windows 2000 Active Directory, this number is computed at 1575 values. From this value, taking various overhead estimates into account and generalizing about the other values that the object might store, the practical limit for number of multivalues stored by an object is estimated at 800 nonlinked values per object across all attributes. </I> </P> <P> <B> <I> Attributes that represent links do not count in this value. For example, the members linked, multivalued attribute of a group object can store many thousands of values because the values are links only. </I> </B> </P> <P> <B> <I> The practical limit of 800 nonlinked values per object is increased in Windows Server 2003 and later. 
</I> </B> <I> When the forest has a functional level of Windows Server 2003 or higher, for a theoretical record that has only one attribute with the minimum of overhead, the maximum number of multivalues possible in one record is computed at 3937. Using similar estimates for overhead, <B> a practical limit for nonlinked multivalues in one record is approximately 1200 </B> . These numbers are provided only to point out that the maximum size of an object is somewhat larger in Windows Server 2003 and later.” </I> </P> <P> </P> <P> </P> <P> </P> </BLOCKQUOTE> <P> (Emphasis is mine.) </P> <P> Alright, so according to the above article, if I’m in an Active Directory domain running all Server 2003 or better, which I am, then a “practical” limit for non-linked multi-value attributes should be approximately 1200 values. </P> <P> So let’s put that to the test, shall we? </P> <P> I wrote a quick and dirty test script with PowerShell that would generate a random 8-character string from a pool of characters (i.e., a random fictitious user ID,) and then add that random user ID to the <B> memberUid </B> attribute of a security group, in a loop until the script encounters an error because the script can’t add any more values: </P> <BLOCKQUOTE> <P> # <STRONG> This script is for testing purposes only! 
</STRONG> <BR /> $ValidChars = @('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', <BR /> 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', <BR /> 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', <BR /> 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', <BR /> 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', <BR /> 'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7','8', '9') </P> <P> [String]$Str = [String]::Empty <BR /> [Int]$Bytes = 0 <BR /> [Int]$Uids = 0 <BR /> While ($Uids -LT 1000000) <BR /> { <BR /> $Str = [String]::Empty <BR /> 1..8 | % { $Str += ($ValidChars | Get-Random) } <BR /> Try <BR /> { <BR /> Set-ADGroup 'TestGroup' -Add @{ memberUid = $Str } -ErrorAction Stop <BR /> } <BR /> Catch <BR /> { <BR /> Write-Error $_.Exception.Message <BR /> Write-Host "$Bytes bytes $Uids users added" <BR /> Break <BR /> } <BR /> $Bytes += 8 <BR /> $Uids += 1 <BR /> } </P> </BLOCKQUOTE> <P> Here’s the output from when I run the script: </P> <P> <IMG src="" /> <B> </B> </P> <P> Huh… whaddya’ know? Approximately 1200 users before we hit the “administrative limit,” just like the article suggests. </P> <P> One way of getting around this attribute's maximum size would be to use nested groups, or to break the user IDs apart into two separate groups… although this may cause you to have to change some code on your UNIX systems. It’s typically not a fun day when you first realize this limit exists. Better to know about it beforehand. </P> <P> Another attribute in Active Directory that could potentially hit a similar limit is the <B> servicePrincipalName </B> attribute, as you can read about in <A href="#" target="_blank"> this AskPFEPlat article </A> . </P> <P> Until next time! 
</P> <P> Ryan Ries </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 03:08:12 GMT Ryan Ries 2019-04-05T03:08:12Z SHA1 Key Migration to SHA256 for a two tier PKI hierarchy <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 26, 2015 </STRONG> <BR /> <P> Hello. <A href="" target="_blank"> Jim </A> here again to take you through the migration steps for moving your two tier PKI hierarchy from SHA1 to SHA256. I will not be explaining the differences between the two or the supportability / security implementations of either. That information is readily available, easily discoverable and is referenced in the links provided below. Please note the following: </P> <BR /> <P> <STRONG> Server Authentication certificates: CAs must begin issuing new certificates using only the SHA-2 algorithm after January 1, 2016. Windows will no longer trust certificates signed with SHA-1 after January 1, 2017. </STRONG> </P> <BR /> <P> If your organization uses its own PKI hierarchy (you do not purchase certificates from a third-party), you will not be affected by the SHA1 deprecation. Microsoft's SHA1 deprecation plan ONLY&nbsp;APPLIES to certificates issued by members of the <A href="#" title="Microsoft Root Certificate program" target="_blank"> Microsoft Trusted Root Certificate program </A> .&nbsp; Your internal PKI hierarchy may continue to use SHA1; however, it is a security risk and diligence should be taken to move to SHA256 as soon as possible. 
<STRONG> <BR /> </STRONG> </P> <BR /> <P> In this post, I will be following the steps documented here with some modifications: Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP) - <A href="#" target="_blank"> </A> </P> <BR /> <P> The steps that follow in this blog will match the steps in the TechNet article above with the addition of screenshots and additional information that the TechNet article lacks. </P> <BR /> <P> Additional recommended reading: </P> <BR /> <BLOCKQUOTE> <BR /> <P> The following blog written by Robert Greene will also be referenced and should be reviewed - <A href="" target="_blank"> </A> </P> <BR /> <P> This Wiki article written by Roger Grimes should also be reviewed as well - <A href="#" target="_blank"> </A> </P> <BR /> <P> Microsoft Trusted Root Certificate: Program Requirements - <A href="#" target="_blank"> </A> </P> <BR /> </BLOCKQUOTE> <BR /> <P> The scenario for this exercise is as follows: </P> <BR /> <P> A two tier PKI hierarchy consisting of an Offline ROOT and an Online subordinate enterprise issuing CA. </P> <BR /> <P> Operating Systems: <BR /> Offline ROOT and Online subordinate are both Windows 2008 R2 SP1 </P> <BR /> <P> OFFLINE ROOT <BR /> CANAME - CONTOSOROOT-CA </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> ONLINE SUBORDINATE ISSUING CA <BR /> CANAME – ContosoSUB-CA </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> First, you should verify whether your CA is using a Cryptographic Service Provider (CSP) or Key Storage Provider (KSP). This will determine whether you have to go through all the steps or just skip to changing the CA hash algorithm to SHA2. The command for this is in step 3. The line to take note of in the output of this command is “Provider =”. If the <STRONG> Provider = line </STRONG> is any of the top five service providers highlighted below, the CA is using a CSP and you must do the conversion steps. 
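If you have several CAs to check, the same "Provider =" test can be scripted against the saved certutil output. A minimal sketch (purely illustrative; it assumes any provider name containing "Key Storage Provider" is a KSP and anything else is a legacy CSP, which matches the provider list shown below):

```python
# Sketch: classify a CA's provider from `certutil -store my <CA name>` output.
# Rule of thumb from the article: KSP names contain "Key Storage Provider";
# the older cryptographic service providers are CSPs.
def provider_kind(certutil_output):
    for line in certutil_output.splitlines():
        if "Provider =" in line:
            name = line.split("=", 1)[1].strip()
            return "KSP" if "Key Storage Provider" in name else "CSP"
    return "unknown"

print(provider_kind("Provider = Microsoft Strong Cryptographic Provider"))
```

If this reports CSP, you must do the conversion steps; if KSP, you can skip straight to changing the hash algorithm.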
The RSA#Microsoft Software <B> K </B> ey <B> S </B> torage <B> P </B> rovider and everything below it are KSP’s. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Here is sample output of the command - Certutil –store my &lt;Your CA common name&gt; </P> <BR /> <P> As you can see, the provider is a CSP. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If you are using a Hardware Storage Module (HSM) you should contact your HSM vendor for special guidance on migrating from a CSP to a KSP. The steps for changing the Hashing algorithm to a SHA2 algorithm would still be the same for HSM based CA’s. </P> <BR /> <P> There are some customers that use their HSM for the CA private / public key, but use Microsoft CSP’s for the Encryption CSP (used for the CA Exchange certificate). </P> <BR /> <P> We will begin at the OFFLINE ROOT. </P> <BR /> <P> BACKUP! BACKUP! BACKUP the CA and Private KEY of both the OFFLINE ROOT and Online issuing CA. If you have more than one CA Certificate (you have renewed multiple times), all of them will need to be backed up. </P> <BR /> <P> Use the MMC to backup the private key or use the CERTSRV.msc and right click the CA name to backup as follows on both the online subordinate issuing and the OFFLINE ROOT CA’s – </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Provide a password for the private key file. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> You may also backup the registry location as indicated in step 1C. </P> <BR /> <P> <STRONG> Step 2 </STRONG> – Stop the CA Service </P> <BR /> <P> <STRONG> Step 3 </STRONG> - This command was discussed earlier to determine the provider. </P> <BR /> <UL> <BR /> <LI> Certutil –store my <B> <I> &lt;Your CA common name&gt; </I> </B> </LI> <BR /> </UL> <BR /> <P> <A> </A> Step 4 and Step 6 from the above referenced <A href="#" target="_blank"> TechNet article </A> should be done via the UI. </P> <BR /> <P> a. 
Open the MMC - load the Certificates snapin for the LOCAL COMPUTER </P> <BR /> <P> b. Right click each CA certificate (If you have more than 1) - export </P> <BR /> <P> c. Yes, export the private key </P> <BR /> <P> d. Check - Include all certificates in the certification path if possible </P> <BR /> <P> e. Check - Delete the private key if the export is successful </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> f. Click next and continue with the export. </P> <BR /> <P> <B> Step 5 <BR /> Copy the resultant .pfx file to a Windows 8 or Windows Server 2012 computer </B> </P> <BR /> <P> <B> Conversion requires a Windows Server 2012 certutil.exe, as Windows Server 2008 (and prior) do not support the necessary KSP conversion commands. If you want to convert a CA certificate on an ADCS version prior to Windows Server 2012, you must export the CA certificate off of the CA, import onto Windows Server 2012 or later using certutil.exe with the -KSP option, then export the newly signed certificate as a PFX file, and re-import on the original server. </B> </P> <BR /> <P> Run the command in Step 5 on the Windows 8 or Windows Server 2012 computer. </P> <BR /> <UL> <BR /> <LI> Certutil –csp <B> <I> &lt;KSP name&gt; </I> </B> -importpfx <B> <I> &lt;Your CA cert/key PFX file&gt; </I> </B> </LI> <BR /> </UL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Step 6 </STRONG> </P> <BR /> <P> a. To be done on the Windows 8 or Windows Server 2012 computer as previously indicated using the MMC. </P> <BR /> <P> b. Open the MMC - load the Certificates snapin for the LOCAL COMPUTER </P> <BR /> <P> c. Right click the CA certificate you just imported – All Tasks – export </P> <BR /> <BLOCKQUOTE> <BR /> <P> *I have seen an issue where the “Yes, export the private key” is dimmed after running the conversion command and trying to export via the MMC. 
If you encounter this behavior, simply reimport the .PFX file manually and check the box <B> Mark this key as exportable </B> during the import. This will not affect the previous conversion. </P> <BR /> </BLOCKQUOTE> <BR /> <P> d. Yes, export the private key. </P> <BR /> <P> e. Check - Include all certificates in the certification path if possible </P> <BR /> <P> f. Check - Delete the private key if the export is successful </P> <BR /> <P> g. Click Next and continue with the export. </P> <BR /> <P> h. Copy the resultant .pfx file back to the destination 2008 R2 ROOTCA </P> <BR /> <P> <STRONG> Step 7 </STRONG> </P> <BR /> <P> You can again use the UI (MMC) to import the .pfx back to the computer store on the ROOTCA. </P> <BR /> <P> <B> *Don’t forget during the import to Mark this key as exportable. </B> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <DIV> <BR /> <P> <STRONG> ***IMPORTANT*** </STRONG> </P> <BR /> <P> If you have renewed your CA multiple times with the same key, after exporting the first CA certificate as indicated above in step 4 and step 6, you are breaking the private key association with the previously renewed CA certificates.&nbsp; This is because you are deleting the private key upon successful export.&nbsp; After doing the conversion and importing the resultant .pfx file on the CA (remembering to mark the private key as exportable), you must run the following command from an elevated command prompt for each of the additional CA certificates that were renewed previously: </P> <BR /> <P> certutil –repairstore my &lt;serial number&gt; </P> <BR /> <P> The serial number is found on the Details tab of the CA certificate.&nbsp; This will repair the association of the public certificate to the private key.
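If the CA has been renewed several times, you can pull each serial number out of saved `certutil –store my` output before running the repair command on each one. Here is a minimal Python sketch, purely for illustration; it assumes the output contains the standard `Serial Number: <hex>` lines, and `extract_serial_numbers` is a hypothetical helper, not part of any Microsoft tooling:

```python
import re

def extract_serial_numbers(certutil_output):
    # certutil -store prints one "Serial Number: <hex>" line per certificate
    return re.findall(r"Serial Number:\s*([0-9a-fA-F]+)", certutil_output)

# Illustrative sample output, not real certificates
sample = """================ Certificate 0 ================
Serial Number: 61b2a3c4d5e6f708
Issuer: CN=Contoso Root CA
================ Certificate 1 ================
Serial Number: 1a2b3c4d5e6f7081
Issuer: CN=Contoso Root CA
"""

for serial in extract_serial_numbers(sample):
    # each of these would be passed to: certutil -repairstore my <serial>
    print(serial)
```

Each extracted value is what you would paste into the `certutil –repairstore my` command above.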
</P> <BR /> </DIV> <BR /> <P> <STRONG> Step 8 </STRONG> – </P> <BR /> <P> Your CSP.reg file must contain the information highlighted at the top – </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Step 8c </STRONG> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Step 8d </STRONG> – Run CSP.reg </P> <BR /> <P> <STRONG> Step 9 </STRONG> </P> <BR /> <P> Your EncryptionCSP.reg file must contain the information highlighted at the top – </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Step 9c </STRONG> – verification - certutil -v -getreg ca\encryptioncsp\EncryptionAlgorithm </P> <BR /> <P> <STRONG> Step 9d </STRONG> – Run EncryptionCsp.reg </P> <BR /> <P> <STRONG> Step 10 </STRONG> </P> <BR /> <P> Change the CA hash algorithm to SHA256 </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Start the CA service. </P> <BR /> <P> <STRONG> Step 11 </STRONG> </P> <BR /> <P> For a root CA: You will not see the migration take effect for the CA certificate itself until you complete the migration of the root CA and then renew the root CA's certificate. </P> <BR /> <P> Before we renew the OFFLINE ROOT certificate, this is how it looks: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Whether to renew the CA’s own certificate with a new or the existing (same) key depends on the remaining validity of the certificate. If the certificate is at or nearing 50% of its lifetime, it would be a good idea to renew with a new key. See the following for additional information on CA certificate renewal – </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> After we renew the OFFLINE ROOT certificate with a new key or the same key, its own certificate will be signed with a SHA256 signature, as indicated in the screenshot below: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Your OFFLINE ROOT CA is now completely configured for SHA256.
</P> <BR /> <P> Running CERTUTIL –CRL will generate a new CRL file, also signed using SHA256. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> By default, CRT, CRL and delta CRL files are published on the CA in the following location - <EM> %SystemRoot%\System32\CertSrv\CertEnroll </EM> . The format of the CRL file name is the "sanitized name" of the CA plus, in parentheses, the "key id" of the CA (if the CA certificate has been renewed with a new key) and a .CRL extension. See the following for more information on CRL distribution points and the CRL file name - <A href="#" target="_blank"> </A> </P> <BR /> <P> Copy this new .CRL file to a domain-joined computer and publish it to Active Directory while logged on as an Enterprise Administrator from an elevated command prompt. </P> <BR /> <P> Do the same for the new SHA256 ROOT CA certificate. </P> <BR /> <UL> <BR /> <LI> certutil -f -dspublish <B> <I> &lt;.CRT file&gt; </I> </B> RootCA </LI> <BR /> <LI> certutil –f -dspublish <B> <I> &lt;.CRL file&gt; </I> </B> </LI> <BR /> </UL> <BR /> <P> Now continue with the migration of the Online Issuing Subordinate CA. </P> <BR /> <P> <STRONG> Step 1 </STRONG> – Back up the CA database and private key. </P> <BR /> <P> Back up the CA registry settings. </P> <BR /> <P> <STRONG> Step 2 </STRONG> – Stop the CA service. </P> <BR /> <P> <STRONG> Step 3 </STRONG> - Get the details of your CA certificates </P> <BR /> <P> Certutil –store my <B> <I> “Your SubCA name” </I> </B> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> I have never renewed the Subordinate CA certificate, so there is only one. </P> <BR /> <P> <STRONG> Step 4 – 6 </STRONG> </P> <BR /> <P> As you know from what was previously accomplished with the OFFLINE ROOT, steps 4-6 are done via the MMC, and we must do the conversion on a Windows 8 or Windows Server 2012 or later computer for reasons explained earlier.
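As a quick illustration of the CRL naming rule described above, here is a minimal Python sketch. The function name and sample CA name are hypothetical, and real "sanitized" names follow additional character-substitution rules not modeled here:

```python
def crl_file_name(sanitized_ca_name, key_id=None):
    # The key id appears in parentheses only when the CA certificate
    # has been renewed with a new key.
    suffix = "" if key_id is None else "({0})".format(key_id)
    return "{0}{1}.crl".format(sanitized_ca_name, suffix)

print(crl_file_name("ContosoRootCA"))     # original key
print(crl_file_name("ContosoRootCA", 1))  # after one renewal with a new key
```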
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <B> *When you import the converted SUBCA .pfx file via the MMC, you must remember to again Mark this key as exportable. </B> </P> <BR /> <P> <STRONG> Step 8 – Step 9 </STRONG> </P> <BR /> <P> Creating and importing the registry files for CSP and CSP Encryption (see above) </P> <BR /> <P> <STRONG> Step 10 </STRONG> - Change the CA hash algorithm to SHA-2 </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Now, in the screenshot below, you can see the hash algorithm is SHA256. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The Subordinate CA’s own certificate is still SHA1. In order to change this to SHA256, you must renew the Subordinate CA’s certificate. When you renew the Subordinate CA’s certificate, it will be signed with SHA256, because we previously changed the hash algorithm on the OFFLINE ROOT to SHA256. </P> <BR /> <P> Renew the Subordinate CA’s certificate following the proper steps for creating the request and submitting it to the OFFLINE ROOT. Information on whether to renew with a new key or the same key was provided earlier. Then copy the resultant .CER file back to the Subordinate CA and install it via the Certification Authority management interface. </P> <BR /> <P> If you receive the following error when installing the new CA certificate – </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Check the newly procured Subordinate CA certificate via the MMC. On the Certification Path tab, the certificate status will indicate – “The signature of the certificate cannot be verified” </P> <BR /> <P> This error can have several causes. One is that you did not –dspublish the new OFFLINE ROOT .CRT file and .CRL file to Active Directory as previously instructed.
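If you want to double-check off-box which hash algorithm actually signed a certificate, here is a minimal sketch using the third-party Python `cryptography` package. This is my own assumption-laden illustration, not part of the migration steps; certutil and the MMC Details tab show the same information, and `signature_hash_name` is a hypothetical helper:

```python
from cryptography import x509  # third-party "cryptography" package (assumed installed)

def signature_hash_name(pem_bytes):
    # Returns the hash algorithm that signed the certificate,
    # e.g. "sha1" before the migration, "sha256" after renewal.
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.signature_hash_algorithm.name
```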
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Or you did publish the Root CA certificate, but the Subordinate CA has not done Autoenrollment (AE) yet and therefore has not downloaded the “NEW” Root CA certificate via AE methods, or AE may be disabled on the CA altogether. </P> <BR /> <P> After the files are published to AD and after verification of AE and group policy updates on the Subordinate CA, the install and subsequent starting of Certificate Services will succeed. </P> <BR /> <P> Now, in addition to the hash algorithm being SHA256 on the Subordinate CA, the signature on its own certificate will also be SHA256. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The Subordinate CA’s .CRL files are also now signed with SHA256 – </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Your migration to SHA256 on the Subordinate CA is now completed. </P> <BR /> <P> I hope you found this information helpful and that it makes your SHA256 migration project planning and implementation less daunting. </P> <BR /> <P> Jim Tierney </P> </BODY></HTML> Fri, 05 Apr 2019 03:07:12 GMT JustinTurner 2019-04-05T03:07:12Z Manage Developer Mode on Windows 10 using Group Policy <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 22, 2015 </STRONG> <BR /> <P> Hi All, </P> <P> We’ve had a few folks want to know how to disable Developer Mode using Group Policy, but still allow side-loaded apps to be installed.&nbsp; Here is a quick note on how to do this. (A more AD-centric post from Linda Taylor is on its way.) </P> <P> On the Windows 10 device, click on <B> Windows logo key‌ <IMG src="" /> </B> and then click on <B> Settings </B> . </P> <P> <IMG src="" /> </P> <P> Click on <B> Update &amp; Security </B> </P> <P> <IMG src="" /> </P> <P> From the left-side pane, select <B> For developers </B> and from the right-side pane, choose the level that you need.
</P> <P> <IMG src="" /> </P> <P> · If you choose <B> Sideload apps </B> : You can install an .appx and any certificate that is needed to run the app with the PowerShell script that is created with the package. Or you can use manual steps to install the certificate and package separately. </P> <P> · If you choose <B> Developer mode </B> : You can debug your apps on that device. You can also sideload any apps if you choose developer mode, even ones that you have not developed on the device. You just have to install the .appx with its certificate for sideloading. </P> <P> <B> Use Group Policy Editor (gpedit) to enable your device: </B> </P> <P> Using Group Policy Editor (gpedit.msc), you can enable or disable developer mode on computers running Windows 10. </P> <P> 1. Open the Windows Run box using the keyboard: press <B> Windows logo key‌ <IMG src="" /> +R </B> </P> <P> 2. Type in <B> gpedit.msc </B> and then press <B> Enter </B> . </P> <P> 3. In Group Policy Editor, navigate to <B> Computer Configuration\Administrative Templates\Windows Components\App Package Deployment </B> . </P> <P> 4. From the right-side pane, double-click on <B> Allow all trusted apps to install </B> and click the <B> Enabled </B> button. </P> <P> 5. Click on <STRONG> Apply </STRONG> and then <STRONG> OK </STRONG> . </P> <P> Notes: </P> <P> · Allow all trusted apps to install </P> <P> o If you want to disable access to everything in For developers, disable this policy setting. </P> <P> o If you enable this policy setting, you can install any LOB or developer-signed Windows Store app. </P> <P> <STRONG> If you want to allow side-loaded apps to install but disable the other developer mode options, disable "Developer mode" and enable "Allow all trusted apps to install". </STRONG> </P> <P> · Group policies are applied every 90 minutes, plus a random offset of up to 30 minutes. To apply the policy immediately, run gpupdate from the command prompt.
</P> <P> For more information on Developer Mode, see the following MSDN article: <BR /> <A href="#" target="_blank"> </A> </P> </BODY></HTML> Fri, 05 Apr 2019 03:03:29 GMT JustinTurner 2019-04-05T03:03:29Z Windows 10 Group Policy (.ADMX) Templates now available for download <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Aug 07, 2015 </STRONG> <BR /> <P> Hi everyone, Ajay here.&nbsp; I wanted to let you all know that we have released the Windows 10 Group Policy (.ADMX) templates on our download center as an MSI installer package. These .ADMX templates are released as a separate download package so you can manage group policy for Windows 10 clients more easily. </P> <P> This new package includes additional (.ADMX) templates which are not included in the RTM version of Windows 10.
</P> <OL> <LI> DeliveryOptimization.admx </LI> <LI> fileservervssagent.admx </LI> <LI> gamedvr.admx </LI> <LI> grouppolicypreferences.admx </LI> <LI> grouppolicy-server.admx </LI> <LI> mmcsnapins2.admx </LI> <LI> terminalserver-server.admx </LI> <LI> textinput.admx </LI> <LI> userdatabackup.admx </LI> <LI> windowsserver.admx </LI> </OL> <P> To download the Windows 10 Group Policy (.ADMX) templates, please visit <A href="#" target="_blank"> </A> </P> <P> To review which settings are new in Windows 10, review the Windows 10 ADMX spreadsheet here: <A href="#" target="_blank"> </A> </P> <P> Ajay Sarkaria </P> <BR /> </BODY></HTML> Fri, 05 Apr 2019 03:02:45 GMT TechCommunityAPIAdmin 2019-04-05T03:02:45Z Troubleshoot ADFS 2.0 with these new articles <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on May 06, 2015 </STRONG> <BR /> Hi all, here's a quick public service announcement to highlight some recently published ADFS 2.0 troubleshooting guidance. We get a lot of questions about configuring and troubleshooting ADFS 2.0, so our support and content teams have pitched in to create a series of troubleshooting articles to cover the most common scenarios. <BR /> <BR /> <A href="#" target="_blank"> ADFS 2.0 connectivity problems: "This page cannot be displayed" </A> - You receive a “This page cannot be displayed” error message when you try to access an application on a website that uses AD FS 2.0. Provides a resolution. <BR /> <BR /> <A href="#" target="_blank"> ADFS 2.0 ADFS service configuration and startup issues-ADFS service won't start </A> - Provides troubleshooting steps for ADFS service configuration and startup problems. <BR /> <BR /> <A href="#" target="_blank"> ADFS 2.0 Certificate problems-An error occurred during an attempt to build the certificate chain </A> - A certificate-related change in AD FS 2.0 causes certificate, SSL, and trust errors, including Event 133. Provides a resolution.
<BR /> <BR /> <A href="#" target="_blank"> ADFS 2.0 authentication problems: "Not Authorized HTTP error 401" </A> - You cannot authenticate an account in AD FS 2.0, you are prompted for credentials, and event 111 is logged. Provides a resolution. <BR /> <BR /> <A href="#" target="_blank"> ADFS 2.0 claims rules problems: "Access is denied" </A> - You receive an "Access Denied" error message when you try to access an application in AD FS 2.0. Provides a resolution. <BR /> <BR /> <P> We hope you will find these troubleshooters useful. You can provide feedback and comments at the bottom of each KB if you want to help us improve them. </P> </BODY></HTML> Fri, 05 Apr 2019 03:02:40 GMT JustinTurner 2019-04-05T03:02:40Z A Treatise on Group Policy Troubleshooting–now with GPSVC Log Analysis! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 17, 2015 </STRONG> <BR /> <P> Hi all, <B> David Ani </B> here from Romania. This guide outlines basic steps used to troubleshoot Group Policy application errors using the Group Policy Service Debug logs (gpsvc.log). A basic understanding of the logging discussed here will save time and may prevent you from having to open a support ticket with Microsoft. Let's get started. </P> <P> The gpsvc log has evolved from the User Environment Debug Logs (userenv log) in Windows XP and Windows Server 2003, but the basics are still there and the pattern is the same. There are also changes from 2008 to 2012 in the logging itself, but they are minor and will not prevent you from understanding your first steps in analyzing the debug logs. </P> <P> <B> Overview of Group Policy Client Service (GPSVC) </B> </P> <UL> <LI> One of the major changes that came with Windows Vista and later operating systems is the new <B> Group Policy Client </B> service. Earlier operating systems used the <B> WinLogon </B> service to apply Group Policy.
However, the new Group Policy Client service improves the overall stability of the Group Policy infrastructure and the operating system by isolating it from the WinLogon process. <BR /> </LI> <LI> The service is responsible for applying settings configured by administrators to computers and users through the Group Policy component. If the service is stopped or disabled, the settings will not be applied, so applications and components will not be manageable through Group Policy. Please keep in mind that, for increased security, users cannot start or stop the Group Policy Client service. In the Services snap-in, the options to start, stop, pause, and resume the Group Policy client are unavailable. <BR /> </LI> <LI> Finally, any components or applications that depend on the Group Policy component will not be functional if the service is stopped or disabled. </LI> </UL> <P> Note: The important thing to remember is that the Group Policy Client is a service running on every OS since Vista and is responsible for applying GPOs. The process itself will run under a svchost instance, which you can check by using the “tasklist /svc” command line. </P> <P> <IMG src="" /> </P> <P> One final point: Since the startup value for the service is Automatic ( <STRONG> Trigger Start </STRONG> ), you may not always see it in the list of running services. It will start, perform its actions, and then stop. </P> <P> <B> Group Policy processing overview <BR /> </B> Group Policy processing happens in two phases: </P> <UL> <LI> <B> Group Policy Core Processing - </B> where the client enumerates all Group Policies together with all settings that need to be applied. It will connect to a Domain Controller, accessing Active Directory and SYSVOL, and gather all the required data in order to process the policies. <BR /> </LI> <LI> <B> Group Policy CSE Processing - </B> Client Side Extensions (CSEs) are responsible for client-side policy processing.
These CSEs ensure all settings configured in the GPOs will be applied to the workstation or server. </LI> </UL> <BLOCKQUOTE> <P> <STRONG> Note: </STRONG> The Group Policy architecture includes both server and client-side components. The server component includes the user interface (GPEdit.msc, GPMC.msc) that an administrator can use to configure a unique policy. GPEdit.msc is always present even on client SKUs, while GPMC.msc and GPME.msc get installed either via RSAT or if the machine is a domain controller. When Group Policy is applied to a user or computer, the client component interprets the policy and makes the appropriate changes to the environment. These are known as Group Policy client-side extensions. </P> </BLOCKQUOTE> <P> See the following post for a reference list for most of the CSEs: <A href="#" target="_blank"> </A> </P> <P> In troubleshooting a given extension's application of policy, the administrator can view the configuration parameters for that extension. These parameters are in the form of registry values. There are two things to keep in mind: </P> <UL> <LI> When configuring GPOs in your Domain, you must make sure they have been replicated to all domain controllers, both in AD and SYSVOL. It is important to understand that AD replication is not the same as SYSVOL replication, and one can be successful while the other is not. However, if you have a Windows 8 or Windows Server 2012 or later OS, this is easily verified using the Group Policy Management Console (GPMC) and the <A href="#" target="_blank"> status tab </A> for an Organizational Unit (OU). <BR /> </LI> <LI> At a high level, we know that the majority of your GPO settings are just registry keys that need to be delivered and set on a client under the user or machine keys. </LI> </UL> <P> <B> First troubleshooting steps </B> <B> </B> </P> <UL> <LI> Start by using GPResult or the Group Policy Results wizard in GPMC and check which GPOs have been applied. What are the winning GPOs?
Are there contradictory settings? Finally, be on the lookout for <B> Loopback Policy Processing </B> that can sometimes deliver unexpected results. </LI> </UL> <BLOCKQUOTE> <P> <STRONG> Note: </STRONG> To have a better understanding of Loopback Policy Processing, please review this post: <A href="#" target="_blank"> </A> </P> </BLOCKQUOTE> <UL> <LI> On the target client, you can run GPResult /v or /h and verify that the GPO is there and listed under “Applied GPOs.” Is it listed? It should look the same as the results from the Group Policy Results wizard in GPMC. If not, verify replication and that policy has been recently applied. </LI> </UL> <BLOCKQUOTE> <P> <STRONG> Note: </STRONG> You can always force a group policy update on a client with <B> gpupdate /force. </B> This will require admin privileges for the computer-side policies. If you do not have admin rights, an old-fashioned reboot should force policy to apply. </P> </BLOCKQUOTE> <UL> <LI> If the Group Policy is unexpectedly listed under “Denied GPOs”, then please check the following: </LI> </UL> <BLOCKQUOTE> <P> – If the reason for “Denied GPOs” is empty, then you probably have linked a User Configuration GPO to an OU with computers, or the other way around. Link the GPO to the corresponding OU, the one which contains your users. </P> <P> – If the reason for “Denied GPOs” is “Access Denied (Security Filtering)”, then make sure you have the correct objects (Authenticated Users or desired Group) in “Security Filtering” in GPMC. <B> Target objects need at least “Read” and “Apply Group Policy” permissions. </B> </P> <P> – If the reason for “Denied GPOs” is “False WMI Filter”, then make sure you configure the WMI filter accordingly, so that the GPO works with the WMI filter for the desired users and computers.
</P> <P> See the following TechNet reference for more on WMI Filters: <A href="#" target="_blank"> </A> </P> <P> – If the Group Policy isn’t listed in gpresult.exe at all, verify the scope by ensuring that either the user or computer object in Active Directory resides in the OU tree the Group Policy is linked to in GPMC. </P> </BLOCKQUOTE> <P> <B> <BR /> Start Advanced Troubleshooting </B> </P> <UL> <LI> If the problem cannot be identified from the previous steps, then we can enable gpsvc logging. On the client where the GPO problem occurs, follow these steps to enable Group Policy Service debug logging. </LI> </UL> <BLOCKQUOTE> <P> 1. Click <B> Start </B> , click <B> Run </B> , type regedit, and then click <B> OK </B> . <BR /> 2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion <BR /> 3. On the <B> Edit </B> menu, point to <B> New </B> , and then click <B> Key </B> . <BR /> 4. Type Diagnostics, and then press <B> ENTER </B> . <BR /> 5. Right-click the Diagnostics subkey, point to <B> New </B> , and then click <B> DWORD (32-bit) Value </B> . <BR /> 6. Type GPSvcDebugLevel, and then press <B> ENTER </B> . <BR /> 7. Right-click GPSvcDebugLevel, and then click <B> Modify </B> . <BR /> 8. In the Value data box, type <B> 30002 </B> (Hexadecimal), and then click <B> OK </B> . </P> </BLOCKQUOTE> <P> <IMG src="" /> </P> <BLOCKQUOTE> <P> 9. Exit Registry Editor. <BR /> 10. View the Gpsvc.log file in the following folder: %windir%\debug\usermode </P> </BLOCKQUOTE> <BLOCKQUOTE> <P> <B> Note - If the usermode folder does not exist, create it under %windir%\debug. <BR /> </B> <B> If the usermode folder does not exist under %WINDIR%\debug\, the gpsvc.log file will not be created.
</B> </P> </BLOCKQUOTE> <UL> <LI> Now, you can either do a “gpupdate /force” to trigger GPO processing or do a restart of the machine in order to get a clean boot application of group policy (Foreground vs Background GPO Processing). </LI> <LI> After that, the log itself should be found under: <B> C:\Windows\Debug\Usermode\gpsvc.log </B> </LI> </UL> <P> An <B> important note </B> for Windows 7/ Windows Server 2008 R2 or older operating systems to consider: On multiprocessor machines, we might have concurrent threads writing to the log at the same time. In heavy logging scenarios, one of the write attempts may fail and we may lose debug log information. <BR /> Concurrent processing is very common with group policy troubleshooting since you usually run "gpupdate /force" without specifying user or machine processing separately. To reduce the chance of lost logging while troubleshooting, initiate machine and user policy processing separately: </P> <UL> <LI> Gpupdate /force /target:computer </LI> <LI> Gpupdate /force /target:user </LI> </UL> <P> <B> <BR /> Analysis - Understanding PID, TID and Dependencies <BR /> </B> Now let's get started with the GPSVC log analysis! The first thing to understand is the Process Identifier (PID) and Thread Identifier (TID) of a gpsvc log. Here is an example: </P> <P> <I> GPSVC( </I> <B> <I> 31c.328 </I> </B> <I> ) 10:01:56:711 GroupPolicyClientServiceMain </I> </P> <P> What are those? As an example I took “GPSVC( <I> 31c.328 </I> )”, where the first number is 31c, which directly relates to the PID. The second number is 328, which relates to the TID. We know that 31c doesn’t look like a PID, but that’s because it is in hexadecimal. By translating it into decimal (0x31C = 796), you will get the PID of the SVCHOST process containing the GPSVC. </P> <P> Then we have a TID, which will differ for every thread the GPClient is working on.
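The hex-to-decimal step above can be sketched in a few lines. Python is used purely for illustration here, and `parse_gpsvc_marker` is a hypothetical helper, not a Microsoft tool:

```python
import re

def parse_gpsvc_marker(line):
    # "GPSVC(31c.328)" -> (PID, TID); both numbers are logged in hexadecimal
    m = re.search(r"GPSVC\(([0-9A-Fa-f]+)\.([0-9A-Fa-f]+)\)", line)
    if m is None:
        return None
    return int(m.group(1), 16), int(m.group(2), 16)

# 0x31C = 796 decimal: the PID of the svchost instance hosting GPSVC
print(parse_gpsvc_marker("GPSVC(31c.328) 10:01:56:711 GroupPolicyClientServiceMain"))
```

With the decimal PID in hand, you can match it against the output of “tasklist /svc” mentioned earlier.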
One thing to consider: we will have two different threads for Machine and User GPO processing, so make sure you follow the correct one. </P> <P> Example: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.328) 10:01:56:711 CGPService::Start: InstantiateGPEngine <BR /> GPSVC(31c.328) 10:01:56:726 CGPService::InitializeRPCServer starting RPCServer <BR /> </I> <I> GPSVC(31c.328) 10:01:56:741 CGPService::InitializeRPCServer finished starting RPCServer. status 0x0 <BR /> </I> <I> GPSVC(31c.328) 10:01:56:741 CGPService::Start: CreateGPSessions <BR /> </I> <I> GPSVC(31c.328) 10:01:56:758 Updating the service status to be RUNNING. </I> </P> </BLOCKQUOTE> <P> This shows that the GPService Engine is being started and we can see that it also checks for dependencies (RPCServer) to be started. <BR /> </P> <P> <B> Synchronous vs Asynchronous Processing <BR /> </B> I will not spend a lot of time explaining this because there is a great post from the GP Team out there which explains this very well. This is important to understand because it has a big impact on how settings are applied and when. Look at: <BR /> <A href="#" target="_blank"> </A> </P> <P> <B> Synchronous vs. asynchronous processing <BR /> </B> Foreground processing can operate under two different modes—synchronously or asynchronously. The default foreground processing mode for Windows clients since Windows XP has been asynchronous. </P> <P> Asynchronous GP processing does not prevent the user from using their desktop while GP processing completes. For example, when the computer is starting up GP asynchronous processing starts to occur for the computer. In the meantime, the user is presented the Windows logon prompt. Likewise, for asynchronous user processing, the user logs on and is presented with their desktop while GP finishes processing. There is no delay in getting either their logon prompt or their desktop during asynchronous GP processing.
When foreground processing is synchronous, the user is not presented with the logon prompt until computer GP processing has completed after a system boot. Likewise the user will not see their desktop at logon until user GP processing completes. This can have the effect of making the user feel like the system is running slow. To summarize, synchronous processing can impact startup time while asynchronous does not. </P> <P> Foreground processing will run synchronously for two reasons: </P> <P> 1)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The administrator forces synchronous processing through a policy setting. This can be done by enabling the <B> Computer Configuration\Policies\Administrative Templates\System\Logon\Always wait for the network at computer startup and logon </B> policy setting. Enabling this setting will make all foreground processing synchronous. This is commonly used for troubleshooting problems with Group Policy processing, but doesn’t always get turned back off again. </P> <BLOCKQUOTE> <P> Note: For more information on fast logon optimization see: <BR /> 305293 Description of the Windows Fast Logon Optimization feature <BR /> <A href="#" target="_blank"> </A> </P> </BLOCKQUOTE> <P> 2)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; A particular CSE requires synchronous foreground processing. There are four CSEs provided by Microsoft that currently require synchronous foreground processing: Software Installation, Folder Redirection, Microsoft Disk Quota and GP Preferences Drive Mapping. If any of these are enabled within one or more GPOs, they will trigger the next foreground processing cycle to run synchronously when they are changed. </P> <P> <B> Action </B> : Avoid synchronous CSEs and don’t force synchronous policy. If usage of synchronous CSEs is necessary, minimize changes to these policy settings. 
<BR /> </P> <P> <B> Analysis - Starting to read into the gpsvc log </B> </P> <P> First, we identify where the machine settings are starting, because they process first: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.37c) 10:01:57:101 CStatusMessage::UpdateWinlogonStatusMessage::++ (bMachine: 1) <BR /> </I> <I> GPSVC(31c.37c) 10:01:57:101 Message Status = &lt;Applying computer settings&gt; <BR /> </I> <I> GPSVC(31c.37c) 10:01:57:101 User SID = MACHINE SID <BR /> </I> <I> GPSVC(31c.37c) 10:01:57:101 Setting GPsession state = 1 <BR /> </I> <I> GPSVC(31c.174) 10:01:57:101 </I> <I> CGroupPolicySession::ApplyGroupPolicyForPrincipal </I> <I> ::++ (bTriggered: 0, </I> <I> bConsole: 0 </I> <I> ) </I> </P> </BLOCKQUOTE> <P> The above lines are quite clear: <I> “&lt;Applying computer settings&gt;” and “User SID = MACHINE </I> SID” point out that we are in machine context. The <I> “bConsole: 0” </I> part is a Boolean console flag with a value of 0, as in false, again indicating machine rather than user processing. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:01:57:101 Waiting for connectivity before applying policies <BR /> </I> <I> GPSVC(31c.174) 10:01:57:116 CGPApplicationService::MachinePolicyStartedWaitingOnNetwork. </I> <I> <BR /> GPSVC(31c.564) 10:01:57:804 NlaGetIntranetCapability returned Not Ready error. Consider it as NOT intranet capable. <BR /> </I> <I> GPSVC(31c.564) 10:01:57:804 There is no connectivity. Waiting for connectivity again... <BR /> </I> <I> GPSVC(31c.564) 10:01:59:319 There is connectivity. <BR /> </I> <I> GPSVC(31c.564) 10:01:59:319 Wait For Connectivity: Succeeded <BR /> </I> <I> GPSVC(31c.174) 10:01:59:319 We have network connectivity... proceeding to apply policy. </I> </P> </BLOCKQUOTE> <P> This shows us that, at this moment in time, the machine does not have connectivity. However, it does state that it is going to wait for connectivity before applying the policies.
After two seconds, we can see that it does find connectivity and moves on with GPO processing. <BR /> <I> It is important to understand that there is a default timeout when waiting for connectivity. The default value is 30 seconds, which is configurable. <BR /> </I> </P> <P> <B> Connectivity <BR /> </B> Now let’s look at a bad-case scenario where no connection is available and we run into a timeout: </P> <BLOCKQUOTE> <P> <I> GPSVC(324.148) 04:58:34:301 Waiting for connectivity before applying policies <BR /> </I> <I> GPSVC(324.578) 04:59:04:301 CConnectivityWatcher::WaitForConnectivity: Failed WaitForSingleObject. <BR /> </I> <I> GPSVC(324.148) 04:59:04:301 Wait for network connectivity timed out... proceeding to apply policy. <BR /> </I> <I> GPSVC(324.148) 04:59:04:301 CGroupPolicySession::ApplyGroupPolicyForPrincipal::ApplyGroupPolicy (dwFlags: 7). <BR /> </I> <I> GPSVC(324.148) 04:59:04:317 Application complete with bConnectivityFailure = 1. </I> </P> </BLOCKQUOTE> <P> As we can see, after 30 seconds it fails with a timeout and then proceeds to apply policies. <BR /> Without a network connection, there are no policies from the domain, and no version checks between the cached policies and the domain ones can be made. <BR /> In such cases, you will always encounter “bConnectivityFailure = 1”, which is typical not only of general network connectivity issues but of any connectivity problem the machine encounters, such as a failed LDAP bind. </P> <P> <B> Slow Link Detection </B> </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:01:59:397 GetDomainControllerConnectionInfo: Enabling bandwidth estimate. <BR /> </I> <I> GPSVC(31c.174) 10:01:59:397 Started bandwidth estimation successfully <BR /> </I> <I> GPSVC(31c.174) 10:01:59:976 Estimated bandwidth : DestinationIP = <BR /> </I> <I> GPSVC(31c.174) 10:01:59:976 Estimated bandwidth : SourceIP = <BR /> </I> <I> GPSVC(31c.174) 10:02:00:007 IsSlowLink: Bandwidth Threshold (WINLOGON) = 500.
<BR /> </I> <I> GPSVC(31c.174) 10:02:00:007 IsSlowLink: Bandwidth Threshold (SYSTEM) = 500. <BR /> </I> <I> GPSVC(31c.174) 10:02:00:007 IsSlowLink: WWAN Policy (SYSTEM) = 0. <BR /> </I> <B> <I> GPSVC(31c.174) 10:02:00:007 IsSlowLink: Current Bandwidth </I> </B> <B> <I> &gt;= </I> </B> <B> <I> Bandwidth Threshold. </I> </B> </P> </BLOCKQUOTE> <P> Moving further, we can see that a bandwidth estimation is taking place; since Windows Vista, this is done through Network Location Awareness ( <A href="#" target="_blank"> NLA </A> ). </P> <P> Slow Link Detection Backgrounder from our very own " <A href="#" target="_blank"> Group Policy Slow Link Detection using Windows Vista and later </A> " </P> <BLOCKQUOTE> <P> The Group Policy service begins bandwidth estimation after it successfully locates a domain controller. Domain controller location includes the IP address of the domain controller. The first action performed during bandwidth estimation is an authenticated LDAP connect and bind to the domain controller returned during the <A href="#" target="_blank"> DC Locator process </A> . </P> <P> This connection to the domain controller is done under the user's security context and uses Kerberos for authentication. This connection does not support using NTLM. Therefore, this authentication sequence must succeed using Kerberos for Group Policy to continue to process. Once successful, the Group Policy service closes the LDAP connection. The Group Policy service makes an authenticated LDAP connection in computer context when user policy processing is configured in loopback-replace mode. </P> <P> The Group Policy service then determines the network name. The service accomplishes this by using <A href="#" target="_blank"> IPHelper APIs </A> to determine the best network interface in which to communicate with the IP address of the domain controller. Additionally, the domain controller and network name are saved in the client computer's registry for future use.
</P> <P> The Group Policy service is ready to determine the status of the link between the client computer and the domain controller. The service asks NLA to report the estimated bandwidth it measured while earlier Group Policy actions occurred. The Group Policy service compares the value returned by NLA to the <B> GroupPolicyMinTransferRate </B> named value stored in the registry. </P> <P> The default minimum transfer rate to measure Group Policy slow link is 500 (Kbps). The link between the domain controller and the client is slow if the estimated bandwidth returned by NLA is lower than the value stored in the registry. The policy value has precedence over the preference value if both values appear in the registry. After successfully determining the link state (fast or slow, no errors), the Group Policy service writes the slow link status into the Group Policy history, which is stored in the registry. The named value is <B> IsSlowLink </B> . </P> <P> If the Group Policy service encounters an error, it reads the last recorded value from the history key and uses that true or false value for the slow link status. </P> </BLOCKQUOTE> <P> There is updated client-side behavior with Windows 8.1 and later: <BR /> <A href="#" target="_blank"> What's New in Group Policy in Windows Server - Policy Caching </A> </P> <BLOCKQUOTE> <P> In <B> Windows Server 2012 R2 and Windows 8.1 </B> , when Group Policy gets the latest version of a policy from the domain controller, it writes that policy to a local store. Then if Group Policy is running in synchronous mode the next time the computer reboots, it reads the most recently downloaded version of the policy from the local store, instead of downloading it from the network. This reduces the time it takes to process the policy. Consequently, the boot time is shorter in synchronous mode.
This is especially important if you have a high-latency connection to the domain controller, for example, with DirectAccess or for computers that are off premises. This behavior is controllable by a new policy called Configure Group Policy Caching. </P> <P> - The updated slow link detection only takes place during synchronous policy processing. It “pings” the Domain Controller by calling <B> DsGetDcName </B> and measuring the duration. </P> <P> - By default, the <I> Configure Group Policy Caching </I> group policy setting is set to Not Configured. The feature is then enabled by default, using the default values for slow link detection (500 ms) and for the time-out for communicating with a Domain Controller (5000 ms) to determine whether it is on the network, provided the conditions below are met: </P> <P> o The Turn off background refresh of Group Policy policy setting is Not Configured or Disabled. </P> <P> o The Configure Group Policy slow link detection policy setting is Not Configured, or, when Enabled, contains a value for Connection speed (Kbps) that is not outlandish (500 is the default value). </P> <P> o The Set Group Policy refresh interval for computers is Not Configured or, when Enabled, contains values for Minutes that are not outlandish (90 and 30 at the default values). </P> </BLOCKQUOTE> <P> <B> Order of processing settings </B> <BR /> Next on the agenda is retrieving GPOs from the domain. Here Group Policy processing and precedence come into play: Group Policy objects that apply to a user (or computer) do not all have the same precedence. <BR /> Settings that are applied later can override settings that are applied earlier. The policies are applied in the hierarchy --&gt; Local machine, Sites, Domains and Organizational Units (LSDOU). <BR /> For nested organizational units, GPOs linked to parent organizational units are applied before GPOs linked to child organizational units are applied.
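</P> <P> The LSDOU ordering can be illustrated with a small sketch: each scope contributes a set of settings, and a scope applied later overwrites earlier ones on conflict (enforced links and Block Inheritance, covered next, are the exceptions). The scope names and setting values below are invented purely for illustration: </P>

```python
# Apply settings in LSDOU order: Local, Site, Domain, then OUs from
# parent to child. A later scope wins on conflict. The scopes and
# values are made up for illustration only.
scopes = [
    ("Local",           {"ScreenSaveActive": "1"}),
    ("Site",            {}),
    ("Domain",          {"ScreenSaveActive": "0", "MinPwdLen": "8"}),
    ("OU=Workstations", {"ScreenSaveActive": "1"}),
]

def effective_settings(scopes):
    result = {}
    for _name, settings in scopes:
        result.update(settings)   # later scope overwrites earlier ones
    return result

print(effective_settings(scopes))
# The OU-level value of ScreenSaveActive wins over the Domain-level one.
```

<P>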
</P> <P> <B> Note: </B> The order in which GPOs are processed is significant because when policy is applied, it overwrites policy that was applied earlier. </P> <P> There are of course some exceptions to the rule: </P> <UL> <LI> A GPO link may be <B> enforced </B> , or <B> disabled </B> , or both. </LI> <LI> A GPO may have its user settings disabled, its computer settings disabled, or all settings disabled. </LI> <LI> An organizational unit or a domain may have <B> Block Inheritance </B> set. </LI> <LI> Loopback may be enabled. </LI> </UL> <BLOCKQUOTE> <P> For a better understanding regarding these, please have a look at the following TechNet article: <A href="#" target="_blank"> http:// </A> <A href="#" target="_blank"> </A> </P> </BLOCKQUOTE> <P> <B> How does the order of processing look in a gpsvc log <BR /> </B> In the gpsvc log you will notice that the LDAP search is done starting at the OU level and going up to the site level. </P> <BLOCKQUOTE> <P> "The Group Policy service uses the distinguished name of the computer or user to determine the list of OUs and the domain it must search for group policy objects. The Group Policy service builds this list by analyzing the distinguished name from left to right. The service scans the name looking for each instance of OU= in the name. The service then copies the distinguished name to a list, which is used later. The Group Policy service continues to scan the distinguished name for OUs until it encounters the first instance of DC=. At this point, the Group Policy service has found the domain name, finally it searches for policies at site level." </P> </BLOCKQUOTE> <P> As you have probably noticed in our example, we only have two GPOs, one at the OU level and one at the Domain level. </P> <P> The searches are done using the policies’ GUIDs and not their names, the same way the policies are stored in Sysvol, by GUID rather than by name.
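</P> <P> The distinguished-name scan quoted above is easy to sketch: walk the DN left to right, collect every OU= component, stop at the first DC=, then append the domain and finally the site. A minimal Python sketch (the function name is my own, and the site DN is passed in separately, since it comes from DC Locator rather than from the principal's DN): </P>

```python
def gpo_search_scopes(dn, site_dn):
    """Build the list of containers searched for GPO links from a
    principal's distinguished name, as described in the quote above."""
    parts = dn.split(",")
    ous, domain_parts = [], []
    for i, part in enumerate(parts):
        if part.upper().startswith("OU="):
            # Each OU scope is the DN from this component onward.
            ous.append(",".join(parts[i:]))
        elif part.upper().startswith("DC="):
            # The first DC= marks the domain; stop scanning for OUs.
            domain_parts = parts[i:]
            break
    domain = ",".join(domain_parts)
    # Search order seen in the log: OU level(s), then domain, then site.
    return ous + [domain, site_dn]

scopes = gpo_search_scopes(
    "CN=PC1,OU=Workstations,DC=contoso,DC=lab",
    "CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=contoso,DC=lab")
print(scopes)
```

<P> For our example machine this yields exactly the three SearchDSObject targets shown in the excerpt below: the Workstations OU, the domain, and the default site. </P> <P>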
<BR /> It is always a best practice to be aware of the policy name <I> and </I> its GUID, thus making it easier to work with, while troubleshooting. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:01:59:413 GetGPOInfo: Entering... <BR /> </I> <I> GPSVC(31c.174) 10:01:59:413 GetMachineToken: Looping for authentication again. </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Searching &lt;OU=Workstations,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Found GPO(s) </I> <I> : &lt;[LDAP://cn={CC02524C-727C-4816-A298- <BR /> 63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab; </I> <I> <STRONG> 0 </STRONG> </I> <I> ]&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): ============================== </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): Deferring search for LDAP://cn={CC02524C-727C-4816-A298-63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Searching &lt;DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Found GPO(s) </I> <I> : &lt;[LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab; </I> <I> <STRONG> 0 </STRONG> </I> <I> ]&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): ============================== </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): Deferring search for LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab <BR /> </I> <I> GPSVC(31c.174) 10:01:59:522 SearchDSObject: </I> <I> Searching &lt;CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:522 SearchDSObject: No GPO(s) for this object. 
</I> </P> </BLOCKQUOTE> <P> You can see if the policy is enabled, disabled, or enforced here: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Searching &lt;OU=Workstations,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:01:59:413 SearchDSObject: </I> <I> Found GPO(s) </I> <I> : &lt;[LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab; </I> <I> <STRONG> 0 </STRONG> </I> <I> ]&gt; </I> </P> </BLOCKQUOTE> <P> Note the <B> 0 </B> at the end of the LDAP query; this is the default setting. If the value were <B> 1 </B> instead of <B> 0 </B> it would mean the policy is set to disabled. In other words, a value of 1 means the policy is linked to that particular OU, domain, or site level but is disabled. If the value is set to 2 then it would mean that the policy has been set to “Enforced.” </P> <P> A setting of “Enforced” means that if two separate GPOs have the same setting defined, but hold different values, the one that is set to “Enforced” will win and will be applied to the client. If a policy is set to “Enforced” at an OU/domain level and an OU below that is set to block inheritance, then the policy set for “Enforced” will still apply. You cannot block a policy from applying if “Enforced” has been set. </P> <P> Example of an enforced policy: </P> <BLOCKQUOTE> <P> <I> GPSVC(328.7fc) 07:01:14:334 SearchDSObject: Searching &lt;OU=Workstations,DC=contoso,DC=lab&gt; <BR /> </I> <I> GPSVC(328.7fc) 07:01:14:334 SearchDSObject: Found GPO(s): &lt;[LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab; </I> <I> <STRONG> 2 </STRONG> </I> <I> ]&gt; </I> <BR /> <I> GPSVC(328.7fc) 07:01:14:334 </I> <I> AllocGpLink: GPO cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab has enforced link.
</I> <BR /> <I> GPSVC(328.7fc) 07:01:14:334 ProcessGPO(Machine): ============================== </I> </P> </BLOCKQUOTE> <P> Now let‘s move down the log and we‘ll find the next step where the policies are being processed: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): ============================== <BR /> </I> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): </I> <I> Searching &lt;CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): </I> <I> Machine has access to this GPO. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Found common name of: &lt;{31B2F340-016D-11D2-945F-00C04FB984F9}&gt; </I> <I> <BR /> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): </I> <I> </I> <I> GPO passes the filter check. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Found functionality version of: 2 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): </I> <I> Found file system path of: \\contoso.lab\sysvol\contoso.lab\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Found display name of: &lt;Default Domain Policy&gt; </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Found machine version of: GPC is 17, GPT is 17 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found flags of: 0 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{53D6AB1B-2488-11D1-A28C-00C04FB94F17}{53D6AB1D-2488-11D1-A28C-00C04FB94F17}][{827D319E-6EAC-11D2-A4EA-00C04F79F83A}{803E14A0-B4FB-11D0-A0D0-00A0C90F574B}][{B1BE8D72-6EAC-11D2-A4EA-00C04F79F83A}{53D6AB1B-2488-11D1-A28C-00C04FB94F17}{53D6AB1D-2488-11D1-A28C-00C04FB94F17}] </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ============================== </I> <BR /> <I> </I> </P> </BLOCKQUOTE> <P> 
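</P> <P> The trailing number on each gPLink entry is a small bit field, so it can be decoded mechanically: bit 0 means the link is disabled, bit 1 means the link is enforced, matching the 0/1/2 values discussed above (the helper name is my own): </P>

```python
def decode_gplink_option(value):
    """Decode the trailing number of a gPLink entry, e.g. the 2 in
    [LDAP://cn={...},cn=policies,cn=system,DC=contoso,DC=lab;2]"""
    return {
        "disabled": bool(value & 1),  # bit 0: link is disabled
        "enforced": bool(value & 2),  # bit 1: link is enforced
    }

print(decode_gplink_option(0))  # a normal link, as in the first excerpt
print(decode_gplink_option(2))  # an enforced link, as in the excerpt above
```

<P> A value of 3 would mean the link is both disabled and enforced at the same time, which is the "or both" case from the exceptions list earlier. </P> <P>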
<I> </I> </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ============================== </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Searching &lt;cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Machine has access to this GPO. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found common name of: &lt;{CC02524C-727C-4816-A298-D63D12E68C0F}&gt; </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> GPO passes the filter check. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found functionality version of: 2 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Found display name of: &lt;GPO Guide test&gt; </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): </I> <I> Found machine version of: GPC is 1, GPT is 1 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found flags of: 0 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}] </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ============================== </I> </P> </BLOCKQUOTE> <P> First, we find the path where the GPO is stored in AD. As you can see, the GPO is still being represented by the GPO GUID and not its name: <B> <I> Searching &lt;cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab&gt; </I> </B> <BR /> After that, it checks to see if the machine has access to the policy, if yes then the computer can apply the policy; if it does not have access, then he cannot apply it. 
As per our example: <B> <I> Machine has access to this GPO. </I> </B> </P> <P> Moving on, if a policy has a WMI filter applied, the filter is evaluated to see whether it matches the current machine or user. <BR /> The WMI filter can be found in AD. If you are using GPMC, then this can be found in the right-hand pane, in the very bottom box, after highlighting the policy. From our example: <B> <I> GPO passes the filter check. </I> </B> </P> <P> The functionality version has to be 2 for a Windows 2003 or later OS to apply the policy. From our example: <B> <I> Found functionality version of: 2 </I> </B> <BR /> A search in Sysvol for the GPO is also executed; as explained in the beginning, both AD and Sysvol must be aware of the GPO and its settings. From our example: <B> <I> Found file system path of: &lt;\\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F}&gt; </I> </B> </P> <P> The next part is where we check the GPC (Group Policy Container, in AD) and the GPT (Group Policy Template, in Sysvol) for their version numbers. We check the version numbers to determine if the policy has changed since the last time it was applied. If the version numbers are different (GPC different from GPT), then we have either an AD replication or a file replication problem. From our example we can see that there’s a match between those two: <B> <I> Found machine version of: GPC is 1, GPT is 1 </I> </B> </P> <P> The extensions in the next line refer to the CSE (client-side extension) GUIDs and will vary from policy to policy.
As explained, they are the ones in charge at the client side to carry on our settings: From our example: <B> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}] </I> </B> <I> </I> </P> <P> Let‘s have a look at an example with a WMI Filter being used, which does not suit our current system: </P> <BLOCKQUOTE> <P> <I> GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): ============================== </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): Searching &lt;cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): </I> <I> Machine has access to this GPO. </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): Found common name of: &lt;{CC02524C-727C-4816-A298-D63D12E68C0F}&gt; </I> <I> GPSVC(328.7fc) 08:04:32:803 </I> <I> FilterCheck: Found WMI Filter id of: &lt;[contoso.lab;{CD718707-ACBD-4AD7-8130-05D61C897783};0]&gt; </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): </I> <I> The GPO does not pass the filter check and so will not be applied. 
</I> <BR /> <I> GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): Found functionality version of: 2 </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F} </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found display name of: &lt;GPO Guide test&gt; </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found machine version of: GPC is 1, GPT is 1 </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found flags of: 0 </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}] </I> <BR /> <I> GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): ============================== </I> </P> </BLOCKQUOTE> <P> In this scenario a WMI filter was used, which specifies that the used OS has to be Windows XP, so in order to apply the GPO the system OS has to match our filter. As our OS is Windows 2012R2, the filter does not match and so the GPO will not apply. </P> <P> Now we come to the part where we process CSE’s for particular settings, such as Folder Redirection, Disk Quota, etc. If the particular extension is not being used then you can simply ignore this section. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:02:00:038 ProcessGPOs(Machine): Get 2 GPOs to process. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {35378EAC-683F-11D2-A89A-00C04FBBCFA2} <BR /> </I> <I> GPSVC(31c.174) 10:02:00:038 ReadStatus: Read Extension's Previous status successfully. 
</I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {0ACDD40C-75AC-47ab-BAA0-BF6DE7E7FE63} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {0E28E245-9368-4853-AD84-6DA3BA35BB75} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {16be69fa-4209-4250-88cb-716cf41954e0} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {17D89FEC-5C44-4972-B12D-241CAEF74509} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {1A6364EB-776B-4120-ADE1-B63A406A76B5} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {25537BA6-77A8-11D2-9B6C-0000F8080861} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {3610eda5-77ef-11d2-8dc5-00c04fa31a66} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {3A0DBA37-F8B2-4356-83DE-3E90BD5C261F} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {426031c0-0b47-4852-b0ca-ac3d37bfcb39} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {42B5FAAE-6536-11d2-AE5A-0000F87571E3} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4bcd6cde-777b-48b6-9804-43568e23545d} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4CFB60C1-FAA6-47f1-89AA-0B18730C9FD3} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4D2F9B6F-1E52-4711-A382-6A8B1A003DE6} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {5794DAFD-BE60-433f-88A2-1A31939AC01F} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension 
{6232C319-91AC-4931-9385-E70C2B099F0E} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {6A4C88C6-C502-4f74-8F60-2CB23EDC24E2} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {7150F9BF-48AD-4da4-A49C-29EF4A8369BA} </I> <BR /> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {728EE579-943C-4519-9EF7-AB56765798ED} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {74EE6C03-5363-4554-B161-627540339CAB} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {7B849a69-220F-451E-B3FE-2CB811AF94AE} </I> <I> GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {827D319E-6EAC-11D2-A4EA-00C04F79F83A} </I> </P> </BLOCKQUOTE> <P> <BR /> Note: </P> <UL> <LI> You can always do a search for each of these GUIDs on <A href="#" target="_blank"> MSDN </A> and you should be able to find their proper names. </LI> <LI> At the end of the machine GPO thread, we can also see the Foreground processing that we talked about in the beginning. We can see that the Foreground processing was Synchronous and that the next one will be Synchronous as well. </LI> <LI> The end of the machine GPO processing thread comes to an end and we can see that it was completed with a <B> <I> bConnectivityFailure = 0. </I> </B> </LI> </UL> <BLOCKQUOTE> <P> <I> GPSVC(31c.174) 10:02:00:397 </I> <I> ProcessGPOs(Machine): SKU is SYNC: </I> <I> Mode: 1, Reason: 7 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 </I> <I> gpGetFgPolicyRefreshInfo (Machine): Mode: Synchronous </I> <I> , Reason: 7 </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 gpSetFgPolicyRefreshInfo (bPrev: 1, szUserSid: Machine, info.mode: Synchronous) </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 </I> <I> SetFgRefreshInfo: Previous Machine Fg policy Synchronous, Reason: SKU. 
</I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 gpSetFgPolicyRefreshInfo (bPrev: 0, szUserSid: Machine, info.mode: Synchronous) </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 </I> <I> SetFgRefreshInfo: Next Machine Fg policy Synchronous, Reason: SKU. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 ProcessGPOs(Machine): Policies changed - checking if UBPM trigger events need to be fired </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 CheckAndFireGPTriggerEvent: Fired Policy present UBPM trigger event for Machine. </I> <BR /> <I> GPSVC(31c.174) 10:02:00:397 </I> <B> <I> Application complete with bConnectivityFailure = 0. </I> </B> <B> <I> </I> </B> </P> <P> <B> </B> </P> </BLOCKQUOTE> <P> <B> User GPO Thread </B> </P> <P> This next part of the GPO log is dedicated to the user thread. </P> <P> While the machine thread had the TID (31c. <B> 174 </B> ) the user thread has (31c. <B> b8 </B> ) which you can notice when the thread actually starts. You can see that the user SID is found. <BR /> Also, notice this time the “ <I> bConsole: </I> <B> <I> 1 </I> </B> ” at the end instead of 0 which we had for the machine. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.704) 10:02:47:147 CGPEventSubSystem:: </I> <I> GroupPolicyOnLogon </I> <I> ::++ (SessionId: 1) <BR /> </I> <I> GPSVC(31c.704) 10:02:47:147 CGPApplicationService:: </I> <I> UserLogonEvent </I> <I> ::++ (SessionId: 1, ServiceRestart: 0) </I> <BR /> <I> GPSVC(31c.704) 10:02:47:147 CGPApplicationService:: </I> <I> CheckAndCreateCriticalPolicySection. </I> <BR /> <I> GPSVC(31c.704) 10:02:47:147 </I> <I> User SID = &lt;S-1-5-21-646618010-1986442393-1057151281-1103&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 </I> <I> CGroupPolicySession::ApplyGroupPolicyForPrincipal </I> <I> ::++ (bTriggered: 0, bConsole: </I> <B> <I> 1 </I> </B> <I> ) </I> </P> </BLOCKQUOTE> <P> <I> </I> </P> <P> <I> </I> </P> <P> You can see that it does the network check again and that it is also prepared to wait for network. 
</P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:147 CGPApplicationService:: </I> <I> GetTimeToWaitOnNetwork. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Average is 3334. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Current is 2203. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Taking min of 6668 and 120000. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 CGPApplicationService:: </I> <I> GetStartTimeForNetworkWait. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 </I> <I> StartTime For network wait: 3750ms </I> <I> </I> </P> </BLOCKQUOTE> <P> In this case it decides to wait for network with timeout 0 ms because it already has network connectivity and so moves on to processing GPOs. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:147 UserPolicy: Waiting for machine policy wait for network event with timeout 0 ms </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 CGroupPolicySession::ApplyGroupPolicyForPrincipal::ApplyGroupPolicy (dwFlags: 38). </I> </P> </BLOCKQUOTE> <P> The next part remains the same as for the machine thread, it searches and returns networks found, number of interfaces and bandwidth check. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:147 </I> <I> NlaQueryNetSignatures returned 1 networks </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 NSI Information (Network GUID) : {1F777393-0B42-11E3-80AD-806E6F6E6963} </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 </I> <I> # of interfaces : 1 </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:147 Interface ID: {9869CFDA-7F10-4B3F-B97A-56580E30CED7} </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:163 </I> <I> GetDomainControllerConnectionInfo: Enabling bandwidth estimate. 
</I> <BR /> <I> GPSVC(31c.b8) 10:02:47:475 </I> <I> Started bandwidth estimation successfully </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 </I> <I> IsSlowLink: Current Bandwidth </I> <B> <I> &gt;= </I> </B> <I> Bandwidth Threshold. </I> </P> </BLOCKQUOTE> <P> The ldap query for the GPOs is done in the same manner as for the machine thread: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:490 GetGPOInfo: Entering... </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: Searching &lt;OU=Admin Users,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: Found GPO(s): &lt;[LDAP://cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab; </I> <I> 0 </I> <I> ]&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): ============================== </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): Deferring search for LDAP://cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: Searching &lt;DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: Found GPO(s): &lt;[LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab; </I> <I> 0 </I> <I> ]&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): ============================== </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): Deferring search for LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: Searching &lt;CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 </I> <I> SearchDSObject: No GPO(s) for this object. 
</I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 EvaluateDeferredGPOs: Searching for GPOs in cn=policies,cn=system,DC=contoso,DC=lab </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:490 EvaluateDeferredGPOs: Adding filters (&amp;(!(flags:1.2.840.113556.1.4.803:=1))(gPCUserExtensionNames=[*])((|(distinguishedName=CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab)(distinguishedName=cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab)))) </I> </P> </BLOCKQUOTE> <P> We can see the GPOs are processed exactly as explained in the machine part, while the difference is that the GPO has to be available for the user this time and not the machine. The important thing in the following example is that the Default Domain Policy (we know it is the Default Domain Policy because it has a hardcoded GUID <B> <I> {31B2F340-016D-11D2-945F-00C04FB984F9 </I> </B> } which will be that same in every Domain) contains no extensions for the user side, thus being reported to us “ <I> has no extensions </I> ”: </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:851 EvalList: Object &lt;CN= </I> <I> {31B2F340-016D-11D2-945F-00C04FB984F9}, </I> <I> CN=Policies,CN=System,DC=contoso,DC=lab&gt; cannot be accessed/is disabled/ </I> <I> or has no extensions </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): ============================== </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> Searching &lt;cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> User has access to this GPO. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found common name of: &lt;{CCF581E3-E2ED-441F-B932-B78A3DFAE09B}&gt; </I> <I> <BR /> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> GPO passes the filter check. 
</I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found functionality version of: 2 </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CCF581E3-E2ED-441F-B932-B78A3DFAE09B} </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> Found display name of: &lt;GPO Guide Test Admin Users&gt; </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): </I> <I> Found user version of: GPC is 3, GPT is 3 </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found flags of: 0 </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F73-3407-48AE-BA88-E8213C6761F1}] </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): ============================== </I> </P> </BLOCKQUOTE> <P> After that, our policy settings are processed directly into the registry by the CSE: </P> <BLOCKQUOTE> <P> <I> GPSVC(318.7ac) 02:02:02:187 </I> <I> SetRegistryValue: NoWindowsMarketplace =&gt; 1 [OK] </I> <BR /> <I> GPSVC(318.7ac) 02:02:02:187 </I> <I> SetRegistryValue: ScreenSaveActive =&gt; 0 [OK] </I> </P> </BLOCKQUOTE> <P> While moving on to process CSE’s for particular settings, such as Folder Redirection, Disk Quota, etc., exactly as it was done for the machine thread. </P> <P> Here it is the same as the machine thread, where the user thread is also finished with a <I> bConnectivityFailure = 0 </I> and everything was applied as expected. </P> <BLOCKQUOTE> <P> <I> GPSVC(31c.b8) 10:02:47:912 </I> <I> User logged in on active session </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:912 ApplyGroupPolicy: Getting ready to create background thread GPOThread. </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:912 CGroupPolicySession::ApplyGroupPolicyForPrincipal Setting m_pPolicyInfoReadyEvent </I> <BR /> <I> GPSVC(31c.b8) 10:02:47:912 </I> <I> Application complete with </I> <I> bConnectivityFailure = 0 </I> <I> . 
</I> </P> </BLOCKQUOTE> <P> In the gpsvc log, you will always have confirmation of whether the “ <I> problematic </I> ” GPO was actually processed; this makes sure the GPO was read and applied from the domain. The registry values that the GPO contains should be applied on the client side by the CSEs, so if you see a GPO in gpsvc getting applied but the desired setting isn’t taking effect on the client side, it is a good idea to check the registry values yourself with “regedit” to ensure they have been properly set. </P> <P> If these registry values are getting changed <I> after </I> they have been applied, a good tool provided by Microsoft to troubleshoot further is <A href="#" target="_blank"> Process Monitor </A> , which can be used to watch those registry settings and see who’s changing them. </P> <P> There are definitely all sorts of problem scenarios that I haven’t covered with this guide. It is meant as a starter guide to give you an idea of how to follow up if your domain GPOs aren’t getting applied and you want to use the gpsvc log to troubleshoot. 
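</P> <P> When cross-checking the log against the client registry, the “SetRegistryValue” lines shown earlier can also be scraped programmatically. The sketch below is purely illustrative (the function name and the exact log format are assumptions based on the excerpt above), in Python: </P>

```python
import re

# Matches CSE lines of the form shown in the gpsvc excerpt above, e.g.:
#   GPSVC(318.7ac) 02:02:02:187 SetRegistryValue: NoWindowsMarketplace => 1 [OK]
SET_VALUE = re.compile(r"SetRegistryValue:\s+(\S+)\s+=>\s+(\S+)\s+\[(\w+)\]")

def applied_registry_values(log_lines):
    """Collect the name -> value pairs the CSE reported as written ([OK])."""
    applied = {}
    for line in log_lines:
        match = SET_VALUE.search(line)
        if match and match.group(3) == "OK":
            applied[match.group(1)] = match.group(2)
    return applied
```

<P> Feeding the two sample lines above yields NoWindowsMarketplace = 1 and ScreenSaveActive = 0, which you can then compare against what “regedit” shows on the client. </P> <P>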
</P> <P> Finally, as Client-Side Extensions (CSEs) play a major role in GPO settings distribution, here is a list for those of you that want to go deeper with CSE logging, which you can enable in order to gather more information about the CSE state: </P> <P> <B> Scripts and Administrative Templates CSE Debug Logging (gptext.dll) </B> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon </P> <BLOCKQUOTE> <P> ValueName: GPTextDebugLevel <BR /> ValueType: REG_DWORD <BR /> Value Data: 0x00010002 <BR /> Options: 0x00000001 = DL_Normal <BR /> 0x00000002 = DL_Verbose <BR /> 0x00010000 = DL_Logfile <BR /> 0x00020000 = DL_Debugger </P> </BLOCKQUOTE> <P> Log File: C:\WINNT\debug\usermode\gptext.log <BR /> </P> <P> <B> Security CSE WINLOGON Debug Logging (scecli.dll) </B> <BR /> KB article: <A href="#" target="_blank"> 245422 </A> How to Enable Logging for Security Configuration Client Processing in Windows 2000 </P> <P> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A} </P> <BLOCKQUOTE> <P> ValueName: ExtensionDebugLevel <BR /> ValueType: REG_DWORD <BR /> Value Data: 2 <BR /> Options: 0 = Log Nothing <BR /> 1 = Log only errors <BR /> 2 = Log all transactions </P> </BLOCKQUOTE> <P> Log File: C:\WINNT\security\logs\winlogon.log <BR /> </P> <P> <B> Folder Redirection CSE Debug Logging (fdeploy.dll) <BR /> </B> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics </P> <BLOCKQUOTE> <P> ValueName: fdeployDebugLevel <BR /> ValueType: REG_DWORD <BR /> Value Data: 0x0f </P> </BLOCKQUOTE> <P> Log File: C:\WINNT\debug\usermode\fdeploy.log <BR /> </P> <P> <B> Offline Files CSE Debug Logging (cscui.dll) <BR /> </B> KB article: <A href="#" target="_blank"> 225516 </A> How to Enable the Offline Files Notifications Window in Windows 2000 <BR /> </P> <P> <B> Software Installation CSE Verbose logging (appmgmts.dll) <BR /> </B> KB article: <A href="#" target="_blank"> 246509 </A> Troubleshooting Program Deployment by 
Using Verbose Logging <BR /> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics </P> <BLOCKQUOTE> <P> ValueName: AppmgmtDebugLevel <BR /> ValueType: REG_DWORD <BR /> Value Data: 0x9B or 0x4B </P> </BLOCKQUOTE> <P> Log File: C:\WINNT\debug\usermode\appmgmt.log <BR /> </P> <P> <B> Software Installation CSE Windows Installer Verbose logging <BR /> </B> KB article: <A href="#" target="_blank"> 314852 </A> How to enable Windows Installer logging </P> <P> HKLM\Software\Policies\Microsoft\Windows\Installer </P> <BLOCKQUOTE> <P> ValueName: Logging <BR /> Value Type: Reg_SZ <BR /> Value Data: voicewarmup </P> </BLOCKQUOTE> <P> Log File: C:\WINNT\temp\MSI*.log </P> <P> <B> Desktop Standard CSE Debug Logging <BR /> </B> KB article: <A href="#" target="_blank"> 931066 </A> How to enable tracing for client-side extensions in PolicyMaker <BR /> </P> <P> <B> GPEDIT - Group Policy Editor Console Debug Logging <BR /> </B> TechNet article: <A href="#" target="_blank"> Enabling Logging for Group Policy Editor </A> <BR /> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon </P> <BLOCKQUOTE> <P> Value Name: GPEditDebugLevel <BR /> Value Type: REG_DWORD <BR /> Value Data: 0x10002 </P> </BLOCKQUOTE> <P> Log File: %windir%\debug\usermode\gpedit.log <BR /> </P> <P> <B> GPMC - Group Policy Management Console Debug Logging <BR /> </B> TechNet article: <A href="#" target="_blank"> Enable Logging for Group Policy Management Console </A> <BR /> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics </P> <BLOCKQUOTE> <P> Value Name: GPMgmtTraceLevel <BR /> Value Type: REG_DWORD <BR /> Value Data: 2 </P> </BLOCKQUOTE> <P> HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics </P> <BLOCKQUOTE> <P> Value Name: GPMgmtLogFileOnly <BR /> Value Type: REG_DWORD <BR /> Value Data: 1 </P> </BLOCKQUOTE> <P> Log File: C:\Documents and Settings\&lt;user&gt;\Local Settings\Temp\gpmgmt.log </P> <P> <B> RSOP - Resultant Set of Policies Debug Logging <BR /> </B> Debug 
Logging for RSoP Procedures: <BR /> HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon </P> <BLOCKQUOTE> <P> Value Name: RsopDebugLevel <BR /> Value Type: REG_DWORD <BR /> Value Data: 0x00010004 </P> </BLOCKQUOTE> <P> <BR /> Log File: %windir%\system32\debug\USERMODE\GPDAS.LOG <BR /> </P> <P> <B> WMI Debug Logging </B> <BR /> ASKPERF blog post: <A href="#" target="_blank"> WMI Debug Logging </A> <BR /> </P> <P> I hope this was interesting and shed some light on how to start analyzing the gpsvc log. </P> <P> Thank you, </P> <P> David Ani </P> </BODY></HTML> Fri, 05 Apr 2019 03:02:32 GMT JustinTurner 2019-04-05T03:02:32Z Migrating your Certification Authority Hashing Algorithm from SHA1 to SHA2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 01, 2015 </STRONG> <BR /> <P> </P> <BR /> <P> Hey all, Rob Greene here again. Well it’s been a very long while since I have written anything for the AskDS blog. I’ve been heads down supporting all the new cool technology from Microsoft. 
</P> <BR /> <P> I wanted to see if I could head off some cases coming our way with regard to the whole SHA1 deprecation that seems to be getting talked about on all kinds of PKI-related websites. I am not discussing anything new about Microsoft SHA1 deprecation plans. If you want information on this topic, please look at the following link: SHA1 Deprecation Policy - <A href="" target="_blank"> </A> </P> <BR /> <P> It does appear that some web browsers are on a faster timeline to disallow SHA1 certificates, as Google Chrome has outlined in this blog: <A href="#" target="_blank"> </A> </P> <BR /> <P> So as you would suspect, we are starting to get a few calls from customers wanting to know how to migrate their current Microsoft PKI hierarchy to support SHA2 algorithms. We actually do have a TechNet article explaining the process. </P> <BR /> <P> Before you go through this process of updating your current PKI hierarchy, I have one question for you. Are you sure that all operating systems, devices, and applications that currently use internal certificates in your enterprise actually support SHA2 algorithms? </P> <BR /> <P> How about that ancient Java-based application running on the 20-year-old IBM AS400 that basically runs the backbone of your corporate data? Does the AS400 / Java version running on it support SHA2 certificates so that it can do LDAPS calls to the domain controller for user authentication? </P> <BR /> <P> What about the old version of Apache or Tomcat web servers you have running? Do they support SHA2 certificates for the websites they host? </P> <BR /> <P> You are basically going to have to test every application within your environment to make sure that they will be able to do certificate chaining and revocation checking against certificates and CRLs that have been signed using one of the SHA2 algorithms. 
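</P> <BR /> <P> A quick aside on terminology before the testing begins: “SHA2” is a family of hash functions (SHA256, SHA384, SHA512) rather than a single algorithm. As a trivial, purely illustrative first check of whether a given stack even exposes the family, most platforms let you enumerate the digests they support; here is what that looks like in Python (this is not a substitute for end-to-end certificate chain testing): </P>

```python
import hashlib

# Which of the algorithms involved in the migration does this crypto stack offer?
for name in ("sha1", "sha256", "sha384", "sha512"):
    if name in hashlib.algorithms_available:
        bits = hashlib.new(name).digest_size * 8
        print(f"{name}: supported, {bits}-bit digest")
    else:
        print(f"{name}: missing")
```

<P> The real work, of course, is confirming that each application accepts certificates and CRLs signed with these algorithms, which only testing against an actual SHA2 chain can prove. </P> <BR /> <P>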
Heck, you might remember we have the following hotfixes so that Windows XP SP3 and Windows Server 2003 SP2 can properly chain a certificate that contains certification authorities that were signed using SHA2 algorithms. </P> <BR /> <P> Windows Server 2003 and Windows XP clients cannot obtain certificates from a Windows Server 2008-based certification authority (CA) if the CA is configured to use SHA2 256 or higher encryption </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> Applications that use the Cryptography API cannot validate an X.509 certificate in Windows Server 2003 </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> Inevitably we get the question “What would you recommend, Microsoft?” Well, that is really a loaded question, since we have no idea what is in your vast enterprise environment outside of Microsoft operating systems and applications. When this question comes up, the only thing that we can say is that any currently supported Microsoft operating system or application should have no problems supporting a certificate chain or CRL signed using SHA2 algorithms. So if that is the only thing in your environment, you could easily follow the migration steps and be done. However, if you are using a Microsoft operating system outside of mainstream support, it most likely does not support SHA2 algorithms. I actually had a customer ask if Windows CE supported SHA2, which I had to tell him it does not. (Who knew you guys still ran those things in your environments!) </P> <BR /> <P> If you have any 3 <SUP> rd </SUP> party applications or operating systems, then I would suggest you look on the vendor’s website or contact their technical support to get a definitive answer about support for SHA2 algorithms. If you are using a product that has no support, then you might need to stand up a SHA2 certificate chain in a lab environment and test the product. 
Once a problem has been identified, you can work with that vendor to find out if they have a new version of the application and/or operating system that supports SHA2, or find out when they plan on supporting it. </P> <BR /> <P> If you do end up needing to support some applications that currently do not support SHA2 algorithms, I would suggest that you look into bringing up a new PKI hierarchy alongside your current SHA1 PKI hierarchy. Slowly begin migrating SHA2-capable applications and operating systems over to the new hierarchy, and only allow applications and operating systems that support just SHA1 on the existing PKI hierarchy. </P> <BR /> <P> Nah, I want to do the migration! </P> <BR /> <P> So if you made it down to this part of the blog, you either actually want to do the migration or curiosity has definitely got the better of you, so let’s get to it. The TechNet article below discusses how to migrate your private key from using a Cryptographic Service Provider (CSP), which only supports SHA1, to a Key Storage Provider (KSP) that supports SHA2 algorithms: </P> <BR /> <P> Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP) - <A href="#" target="_blank"> </A> </P> <BR /> <P> In addition to this process, I would first recommend that you export all the private and public key pairs that your Certification Authority has before going through with the steps outlined in the above TechNet article. The article seems to assume you have already taken good backups of the Certification Authority’s private keys and public certificates. </P> <BR /> <P> Keep in mind that if your Certification Authority has been in production for any length of time, you have more than likely renewed the Certification Authority certificate at least once in its lifetime. You can quickly find out by looking at the properties of the CA on the General tab. 
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> When you change the hashing algorithm over to a SHA2 algorithm, you are going to have to migrate all CA certificates to use the newer Key Storage Providers if you are currently using Cryptographic Service Providers. If you are NOT using the Microsoft providers, please consult your 3 <SUP> rd </SUP> party vendor to find out their recommended way to migrate from CSPs to KSPs. This would also include those certification authorities that use Hardware Security Modules (HSMs). </P> <BR /> <P> Steps 1-9 in the article further explain backing up the CA configuration, and then changing from CSPs over to KSPs. This is required as I mentioned earlier, since SHA2 algorithms are only supported by Key Storage Providers (KSPs), which were not available prior to Windows Server 2008 Certification Authorities. If you previously migrated your Windows Server 2003 CA to one of the newer operating systems, you were previously kind of stuck using CSPs. </P> <BR /> <P> Step 10 is all about switching over to use SHA2 algorithms, and then starting the Certification Authority back up. </P> <BR /> <P> So there you go. You have your existing Certification Authority issuing SHA2 algorithm certificates and CRLs. This does not mean that you will start seeing SHA256 RSA for the signature algorithm or SHA256 for the signature hash algorithm on the certification authority’s own certificates. For that to happen you would need to do the following: </P> <BR /> <P> · Update the configuration on the CA that issued its certificate and then renew with a new key. </P> <BR /> <P> · If it is a Root CA then you also need to renew with a new key. </P> <BR /> <P> Once the certification authority has been configured to use SHA2 hashing algorithms, not only will newly issued certificates be signed using the new hashing algorithm, all of the certification authority’s CRLs will also be signed using the new hashing algorithm. 
</P> <BR /> <P> Run CertUtil -CRL on the certification authority, which causes the CA to generate new CRLs. Once this is done, double-click one of the CRLs and you will see the new signature algorithm. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> As you can tell, not only do newly issued end-entity certificates get signed using the SHA2 algorithm, so do all existing CRLs that the CA needs to publish. This is why you not only have to update the current CA certificate to use KSPs, you also need to update the previous CA certificates as long as they are still issuing new CRLs. Existing CA certificates issue new CRLs until they expire; once expired, a CA certificate no longer issues CRLs. </P> <BR /> <P> As you can see, the seemingly simple question of “can I migrate my current certification authority from SHA1 to SHA2” is really not such an easy one for us to answer here at Microsoft. I would suspect that most of you are like me and would like to err on the side of caution in this regard. If this were my environment, I would stand up a new PKI hierarchy that is built using SHA2 algorithms from the start. Once that has been accomplished, I would test each application in the environment that leverages certificates. When I run into an application that does not support SHA2, I would contact the vendor and get on record when they are going to start supporting SHA2, or ask the application owner when they are planning to stop using the application. Once all this is documented, I would revisit these end dates to see if the vendor has updated support, or find out if the application owner has replaced the application with something that does support SHA2 algorithms. 
</P> <BR /> <P> Rob “Pass the Hashbrowns” Greene </P> <BR /> <P> <A href="#" target="_blank"> CA_properties.jpg </A> </P> </BODY></HTML> Fri, 05 Apr 2019 03:02:07 GMT Jasben 2019-04-05T03:02:07Z DFSR: Limiting the Number of Imported Replicated Folders when using DB cloning <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Feb 12, 2015 </STRONG> <BR /> <P> Hello! Warren here to talk about a very specific scenario involving the new DB cloning feature added to DFSR in Windows Server 2012 R2. The topic is how to limit or control which RFs you import on the import server in a DB cloning scenario. </P> <P> Ned Pyle has already covered the topic of DB cloning in detail over at the Filecab blog, which you can find <A href="#" target="_blank"> <I> here </I> </A> . If you do not know anything about DB cloning in DFSR, you should read Ned's blog post before reading this one; otherwise, this topic will not make much sense to you. Every DFSR admin should know about DB cloning; it is a very significant feature in DFSR on Windows Server 2012 R2. Trust me, you will want to use it. </P> Why Would I Want to Limit the Number of Replicated Folders on the Import Server? <P> To understand why you may need to limit the number of RFs on an import server, you need to understand these two facts: <BR /> </P> <UL> <LI> When you export a DFSR DB, the export will include every Replicated Folder (RF) on the volume. </LI> <LI> The import server must have a preseeded copy of each RF on the import volume or the import will fail. </LI> </UL> <P> What this means is that if you want to set up or recover a DFSR server using DB cloning, the import and export servers must replicate a common set of RFs. If the export server has 5 RFs on the volume, the import server must have the same 5 RFs preseeded on the import volume. This makes DB cloning unusable in some situations, such as hub-and-spoke deployments. Luckily, there is a workaround. </P> What is the Workaround? 
<P> Fortunately, a workaround is available to limit the number of RFs on the import server. Before we discuss the guts of the workaround, we will briefly review the DB cloning process. Please see <A href="#" target="_blank"> <I> Ned's blog post </I> </A> for in-depth details. </P> <P> 1. Have at least one established RF or create a new one with one primary member </P> <P> 2. Export the DB from the source server with the cmdlet <I> "Export-DfsrClone" </I> . This exports both the DB and a config.xml file </P> <P> 3. Preseed the RF data on the import server. </P> <P> 4. Copy the exported DB and config.xml file to the import server </P> <P> 5. Import the DB on the import server with the cmdlet <I> "Import-DfsrClone" </I> </P> <P> 6. Add the import server to the Replication Group </P> <P> 7. Configure connections </P> <P> 8. Configure membership in the RFs </P> <P> The workaround consists of editing the config.xml file created in step 2 above to remove the RFs that you will not host on the server. Simple enough! How do you edit the config.xml file? </P> How to Edit the Config.xml file <P> Editing the config.xml file will be much easier using Visual Studio or some other XML editing tool. The steps below use Visual Studio 2012 for Windows Desktop, which you can use free of charge. <BR /> <BR /> </P> <TABLE> <TBODY><TR> <TD> <P> <IMG src="" /> Note: </P> </TD> <TD> <P> Whatever tool you use must not re-encode the file when you save it. XMLNotepad, for example, cannot be used as it enforces saving in the UTF-8 format. I recommend using Visual Studio as it "just works" for this task and is free. </P> </TD> </TR> </TBODY></TABLE> <OL> <LI> Make a backup copy of the config.xml file just in case you make a mistake. </LI> <LI> Download Visual Studio Express 2012 for Windows Desktop from <A href="#" target="_blank"> <I> here </I> </A> and install it. 
</LI> <LI> Register for a free PID <A href="#" target="_blank"> <I> here </I> </A> </LI> <LI> Open Visual Studio and enter your PID when prompted. </LI> <LI> Click <B> File </B> \ <B> Open File </B> and select the config.xml file from the exported DB. </LI> <LI> Select <B> Edit </B> \ <B> Advanced </B> \ <B> Format Document </B> . This will switch the file to a hierarchical format, which is much easier to edit. </LI> <LI> Locate the RFs you want to remove from the config.xml file. (I bet you are asking, "Exactly how do I locate the RFs I want to remove in the config.xml?") </LI> </OL> How to locate your RFs in the config.xml: <P> Each RF in the config.xml can be located by these tags: &lt;DfsrReplicatedFolder&gt; and &lt;/DfsrReplicatedFolder&gt;. Each RF name is located by the tags &lt;DfsrReplicatedFolderName&gt; and &lt;/DfsrReplicatedFolderName&gt;. In the example below I have identified the RF "RF01" and circled the beginning and ending tags. </P> <P> <IMG src="" /> </P> <BLOCKQUOTE> <P> 8. Locate the RF or RFs that you do not want to import. Next to the &lt;DfsrReplicatedFolder&gt; tag, click the minus sign to collapse the node. You should now see the collapsed RF as a single line in the config.xml file. An example of a collapsed RF is shown below, circled in red. </P> <P> <IMG src="" /> </P> <OL> <LI> Highlight the collapsed RF, right-click, and select Cut. This removes the RF you do not want to import from the config.xml. Repeat for each RF you want to remove. </LI> <LI> Save the edited config.xml file. The Import-DfsrClone cmdlet is hardcoded to use the filename config.xml. Do not try to use another file name. </LI> </OL> </BLOCKQUOTE> <P> Now that you have your edited config.xml, you can finish your DB clone as you normally would. 
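</P> <P> If you would rather script the edit than hand-edit it in Visual Studio, the same removal can be sketched in a few lines. The element names below are taken from this article; the real config.xml schema may include attributes and structure not shown here, and - as the note above warns - the file must be written back in its original encoding, not UTF-8. Treat this Python sketch as illustrative only: </P>

```python
import xml.etree.ElementTree as ET

def remove_replicated_folders(xml_text, drop_names):
    """Drop every <DfsrReplicatedFolder> whose <DfsrReplicatedFolderName>
    is in drop_names, and return the modified XML as a string.
    The real file must be saved back in its original (non-UTF-8) encoding."""
    root = ET.fromstring(xml_text)
    # Snapshot the tree walk so matching children can be detached safely.
    for parent in list(root.iter()):
        for rf in list(parent.findall("DfsrReplicatedFolder")):
            if rf.findtext("DfsrReplicatedFolderName") in drop_names:
                parent.remove(rf)
    return ET.tostring(root, encoding="unicode")
```

<P> As with the manual edit, remove only the RFs you will not host, keep the file named config.xml, and verify the result still parses before running Import-DfsrClone. </P> <P>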
</P> <P> Warren </P> </BODY></HTML> Fri, 05 Apr 2019 03:01:41 GMT JustinTurner 2019-04-05T03:01:41Z Understanding ATQ performance counters, yet another twist in the world of TLAs <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 24, 2014 </STRONG> <BR /> Hello again, this is guest author <A href="#" target="_blank"> Herbert </A> from Germany. <BR /> <BR /> If you have worked an Active Directory performance issue, you might have noticed a number of AD performance counters for the NTDS and “Directory Services” objects, including some ATQ-related counters. <BR /> <BR /> In this post, I provide a brief overview of ATQ performance counters, how to use them, and several scenarios we've seen. <BR /> <BR /> <B> What are all these ATQ thread counters there for anyway? </B> <BR /> <BR /> “ATQ” stands for “ <B> A </B> synchronous <B> T </B> hread <B> Q </B> ueue”. <BR /> LSASS adopted its threading library from IIS to handle Windows socket communication and uses a thread queue to handle requests from Kerberos and LDAP. <BR /> English versions of ATQ counters are named per component so you can group them together when viewing a performance log. 
Here is a list followed by a short explanation of each ATQ counter: <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <B> Counter </B> <P> </P> </TD> <TD> <P> <B> Explanation </B> </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Estimated Queue Delay </B> </P> </TD> <TD> <P> How long a request has to wait in the queue </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Outstanding Queued Requests </B> </P> </TD> <TD> <P> Current number of requests in the queue </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Request Latency </B> </P> </TD> <TD> <P> Time it takes to process a request </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Threads LDAP </B> </P> </TD> <TD> <P> The number of threads used by the LDAP server as determined by LDAP policy. </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Threads Other </B> </P> </TD> <TD> <P> Threads used by other components, in this case the KDC </P> </TD> </TR> <TR> <TD> <P> <B> ATQ Threads Total </B> </P> </TD> <TD> <P> All threads currently allocated </P> </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <P> <STRONG> More details on the counters <BR /> ATQ Threads Total <BR /> </STRONG> This counter tracks the total number of threads from the <B> ATQ Threads LDAP </B> and <B> ATQ Threads Other </B> counters. The maximum number of threads that a given DC can apply to incoming workloads can be found by multiplying <A href="#" target="_blank"> MaxPoolThreads </A> by the number of logical CPU cores. MaxPoolThreads defaults to a value of 4 in LDAP policy and should not be modified without understanding the implications. <BR /> <BR /> When viewing performance logs from a performance-challenged DC: <BR /> <BR /> </P> <UL> <LI> Compare the “ATQ Threads Total” counter with the other two “ATQ Threads…” counters. If the “ATQ Threads LDAP” counter equals “ATQ Threads Total”, then all of the LDAP listen threads are currently stuck processing LDAP requests. 
If the “ATQ Threads Other” counter equals “ATQ Threads Total”, then all of the LDAP listen threads are busy responding to Kerberos-related traffic. </LI> <LI> Similarly, note how close the current value for ATQ Threads Total is to the max value recorded in the trace, and whether both values are at the maximum number of threads supported by the DC being monitored. </LI> </UL> <BR /> <BR /> Note that the value for the current number of <B> ATQ Threads Total </B> does not have to match the maximum value, as the thread count will increase and decrease based on load. Pay attention when the current value for this counter matches the total # of threads supported by the DC being monitored. <BR /> <BR /> <B> ATQ Threads LDAP <BR /> </B> This is the number of threads currently servicing LDAP requests. If there are a significant number of concurrent LDAP queries being processed, check for: <BR /> <BR /> <UL> <LI> Expensive or inefficient LDAP queries </LI> <LI> Excessive numbers of LDAP queries </LI> <LI> An insufficient number of DCs to service the workload (or existing DCs that are undersized) </LI> <LI> Memory, CPU or disk bottlenecks on the DC </LI> </UL> <BR /> <BR /> Large values for this counter are common, but the thread count should remain less than the total # of threads supported by your DC. The ATQ Threads LDAP and other ATQ counters are captured by the built-in AD Diagnostic Data Collector Set documented in this <A href="#" target="_blank"> blog entry </A> . 
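<BR /> <BR /> To put numbers on the thread ceiling described above: with the default LDAP policy, the ceiling is MaxPoolThreads (4) multiplied by the number of logical CPU cores. Here is a tiny illustrative helper in Python (the function names are mine, not part of any Microsoft tooling):

```python
def atq_thread_ceiling(logical_cores, max_pool_threads=4):
    """Maximum ATQ threads a DC can field: MaxPoolThreads (LDAP policy,
    default 4) times the number of logical CPU cores."""
    return max_pool_threads * logical_cores

def atq_saturated(atq_threads_total, logical_cores, max_pool_threads=4):
    """True once 'ATQ Threads Total' has hit the ceiling - the point at which
    new requests start queuing and 'ATQ Estimated Queue Delay' grows."""
    return atq_threads_total >= atq_thread_ceiling(logical_cores, max_pool_threads)

# A 16-core DC with default policy tops out at 4 * 16 = 64 ATQ threads.
print(atq_thread_ceiling(16))  # 64
```

So when a performance log shows “ATQ Threads Total” pinned at this product while the queue counters are non-zero, the DC is out of worker threads.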
<BR /> <BR /> Follow these guides if applications are generating expensive queries: <BR /> <BR /> <UL> <LI> <A href="#" target="_blank"> DC Optimization </A> </LI> <LI> <A href="#" target="_blank"> Writing efficient directory-enabled applications </A> </LI> </UL> <BR /> <BR /> The <B> ATQ Threads LDAP </B> counter could also run “hot” for reasons that are initially triggered by LDAP but are ultimately caused by external factors: <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <B> External Factor Scenario </B> <P> </P> </TD> <TD> <P> <B> Symptom and Cause </B> </P> </TD> <TD> <P> <B> Resolution </B> </P> </TD> </TR> <TR> <TD> <P> <B> Scenario 1 </B> <BR /> <BR /> DC locator traffic (LDAP ping) from clients whose IP address doesn't map to an AD site <BR /> <BR /> The LDAP server performs an exhaustive address lookup to discover additional client IP addresses so that it may find a site to map to the client. <B> </B> </P> </TD> <TD> <P> LDAP, Kerberos and DC locator responses are slow or time out <BR /> <BR /> Netlogon event 5807 may be logged within a four-hour window. <BR /> <BR /> While waiting for the name resolution response or time-out, the related LDAP ping locks one of the threads of the limited Active Thread Queue (ATQ) pool. Many of these LDAP pings over a longer time may constantly exhaust the ATQ pool. Because the same pool is required for regular LDAP and Kerberos requests, the domain controller may become unresponsive or unavailable to users and applications. </P> </TD> <TD> <P> The problem is described in KB article <A href="#" target="_blank"> 2668820 </A> . Install the corrective fixes and policy documented in KB <A href="#" target="_blank"> 2922852 </A> . </P> </TD> </TR> <TR> <TD> <P> <B> Scenario 2 </B> <BR /> <BR /> DC supports LDAP over SSL/TLS <BR /> <BR /> A user sends a certificate on a session. The server needs to check for certificate revocation, which may take some time. 
<B> </B> </P> </TD> <TD> <P> <BR /> This becomes problematic if network communication is restricted and the DC cannot reach the CRL Distribution Point (CDP) for a certificate. </P> </TD> <TD> <P> To determine if your clients are using secure LDAP (LDAPs), check the counter "LDAP New SSL Connections/sec". <BR /> <BR /> If there are a significant number of sessions, you might want to look at <A href="#" target="_blank"> CAPI-Logging </A> . <BR /> <BR /> See the details below </P> </TD> </TR> </TBODY></TABLE> <BR /> <BR /> <P> For scenario 2: Depending on the details, there are a few approaches to remove the bottleneck: <BR /> <BR /> </P> <OL> <LI> In Certificate Manager, locate the certificate used for LDAPs for the account in question and, on the General tab, select the item “Enable only the following purposes” and uncheck the CLIENT_AUTHENTICATION purpose. The Internet Proxy and Universal Access Gateway team sees this more often for reverse proxy scenarios; this guide describes the Windows Server 2003 UI: <A href="#" target="_blank"> </A> </LI> <LI> Use different certificates that can be checked in the internal network, or remove the CLIENT_AUTHENTICATION purpose on new certificates. </LI> <LI> Allow the DC to access the real CDP, perhaps by allowing it to traverse the proxy to the Internet. It’s quite possible that your security department goes a bit frantic on the idea. </LI> <LI> Shorten the time-out for CRL checks so the DC gives up faster; see ChainUrlRetrievalTimeoutMilliseconds and ChainRevAccumulativeUrlRetrievalTimeoutMilliseconds on <A href="#" target="_blank"> TechNet </A> . This does not avoid the problem, but reduces the performance impact. </LI> <LI> You can suppress the “invitation” to send certificates by not sending a list of trusted roots in the local store by using <A href="#" target="_blank"> SendTrustedIssuerList=0 </A> . This does not help if the client is coded to always include a certificate if a suitable certificate is present. 
The Microsoft LDAP client defaults to doing this, thus: </LI> <LI> Change the client application to not include the user certificate. This requires setting an LDAP session option before starting the actual connection. In <A href="#" target="_blank"> LDAP API </A> set the option: </LI> </OL> <BR /> <BR /> <BLOCKQUOTE> <B> LDAP_OPT_SSPI_FLAGS <BR /> </B> 0x92 <BR /> Sets or retrieves a <B> ULONG </B> value giving the flags to pass to the SSPI <A href="#" target="_blank"> <B> InitializeSecurityContext </B> </A> function. <BR /> <BR /> In System.DirectoryServices.Protocols: <P> </P> <TABLE> <TBODY><TR> <TD> <P> <B> <A href="#" target="_blank"> SspiFlag </A> </B> <B> </B> </P> </TD> <TD> <P> <B> The <A href="#" target="_blank"> SspiFlag </A> </B> <B> property specifies the flags to pass to the Security Support Provider Interface (SSPI) InitializeSecurityContext function. For more information about the InitializeSecurityContext function, see the <A href="#" target="_blank"> InitializeSecurityContext </A> function topic in the MSDN library </B> </P> </TD> </TR> </TBODY></TABLE> <P> From InitializeSecurityContext: </P> <TABLE> <TBODY><TR> <TD> <P> <B> <A href="#" target="_blank"> ISC_REQ_USE_SUPPLIED_CREDS </A> </B> </P> </TD> <TD> <P> <B> Schannel must not attempt to supply credentials for the client automatically </B> </P> </TD> </TR> </TBODY></TABLE> </BLOCKQUOTE> <BR /> <BR /> <P> <B> ATQ Threads Other <BR /> </B> You can also have external dependencies generating requests that hit the Kerberos Key Distribution Center (KDC). <BR /> One common operation is getting the list of global and universal groups from a DC that is not a Global Catalog (GC). <BR /> A 2 <SUP> nd </SUP> external and potentially intermittent root cause occurs when the <B> Kerberos Forest Search Order </B> (KFSO) feature has been enabled on Windows Server 2008 R2 and later KDCs to search trusted forests for SPNs that cannot be located in the local forest. 
<BR /> The worst case scenario occurs when the KDC searches both local and trusted forests for an SPN that can’t be found, either because the SPN does not exist or because the search focused on an incorrect SPN. <BR /> Memory dumps taken from a KDC in this state will reveal a number of threads working on Kerberos service ticket requests along with pending RPC calls to remote domain controllers. <BR /> <A href="#" target="_blank"> Procdump </A> triggered by performance counters could also be used to identify the condition if the spikes last long enough to start and capture the related traffic. <BR /> <BR /> More information on KFSO can be found on <A href="#" target="_blank"> TechNet </A> , including performance counters to monitor when using this feature. <BR /> <BR /> <B> ATQ Queues and ATQ Request Latency <BR /> </B> The ATQ queue and latency counters provide statistics on how requests are being processed. Since the types of requests differ, the average processing time is typically not meaningful. An expensive LDAP query that takes minutes to execute can be masked by hundreds of fast LDAP queries or KDC requests. <BR /> <BR /> The main use of these counters is to monitor the wait time in the queue and the number of requests in the queue. Any non-zero values indicate that the DC has run out of threads. <BR /> <BR /> Note that Performance Monitor counters are sampled only at the instant the counter is read, which is a problem when the sample interval is long. Thus a counter for current queue length such as “ATQ Outstanding Queued Requests” may not reliably show the actual degree of server overload. <BR /> <BR /> To work around this sampling problem, you have to take other counters into consideration to validate the value. If there was an actual wait time, requests must have been sitting in the queue at some point in the last sample interval. 
The load and processing delay was just not bad enough to have at least one in the queue at the sample time-stamp. <BR /> <BR /> <B> What about other thread pools? </B> <BR /> <BR /> LSASS has a number of other worker threads, e.g. to process IPSec-handshakes. Then of course there is the land of RPC server threads for the various RPC servers. Describing all the RPC servers would take up a number of additional blog entries. You can see them listed as “load generators” in the data collector set results. <BR /> <BR /> A lot of details on LSASS ATQ performance counters, I know. But, geeks love the details. <BR /> <BR /> Cheers, <BR /> <BR /> </P> <P> Herbert </P> </BODY></HTML> Fri, 05 Apr 2019 03:01:09 GMT JustinTurner 2019-04-05T03:01:09Z Remove Lingering Objects that cause AD Replication error 8606 and friends <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 15, 2014 </STRONG> <BR /> <P> Introducing the Lingering Object Liquidator </P> <BR /> <P> Hi all, Justin Turner here ---it's been a while since my last <A href="#" target="_blank"> <I> update </I> </A> . The goal of this post is to discuss what causes lingering objects and show you how to download, and then use the new GUI-based <B> Lingering Object Liquidator </B> (LOL) tool to remove them. This is a beta version of the tool, and it is currently not yet optimized for use in large Active Directory environments. </P> <BR /> <P> This is a long article with lots of background and screen shots, so plug-in or connect to a fast connection when viewing the full entry. The bottom of this post contains a link to my AD replication troubleshooting TechNet lab for those that want to get their hands dirty with the joy that comes with finding and fixing AD replication errors.&nbsp; I’ve also updated the post with a link to my Lingering Objects hands-on <A href="#" target="_blank"> lab </A> from TechEd Europe. 
</P> <BR /> Overview of Lingering Objects <BR /> <P> Lingering objects are objects in AD that have been created, replicated, deleted, and then garbage collected on at least the DC that originated the deletion, but still exist as live objects on one or more DCs in the same forest. Lingering object removal has traditionally required lengthy cleanup sessions using tools like <A href="#" target="_blank"> <I> LDP </I> </A> or <A href="#" target="_blank"> <I> repadmin </I> </A> /removelingeringobjects. The removal story improved significantly with the release of <A href="#" target="_blank"> repldiag.exe </A> <I> </I> . We now have another tool for our tool belt: Lingering Object Liquidator. There are related topics such as “lingering links” which will not be covered in this post. </P> <BR /> Lingering Objects Drilldown <BR /> <P> The dominant causes of lingering objects are: </P> <BR /> <P> 1. <STRONG> Long-term replication failures </STRONG> </P> <BR /> <P> While knowledge of creates and modifies is persisted in Active Directory forever, replication partners must inbound replicate knowledge of deleted objects within a rolling Tombstone Lifetime (TSL) # of days (default <A href="#" target="_blank"> <I> 60 or 180 days depending </I> </A> on what OS version created your AD forest). For this reason, it is important to keep your DCs online and replicating all partitions between all partners within a rolling TSL # of days. Tools like REPADMIN /SHOWREPL * /CSV, REPADMIN /REPLSUM and <A href="#" target="_blank"> <I> AD Replication Status </I> </A> should be used to continually identify and resolve replication errors in your AD forest. </P> <BR /> <P> 2. <STRONG> Time jumps </STRONG> </P> <BR /> <P> A system time jump of more than TSL # of days into the past or future can cause deleted objects to be prematurely garbage collected before all DCs have inbound replicated knowledge of all deletes. 
The protection against this is to ensure that: </P> <BR /> <OL> <BR /> <OL> <BR /> <LI> Your forest root PDC is <A href="#" target="_blank"> <I> continually configured </I> </A> with a reference time source (including following FSMO transfers) </LI> <BR /> <LI> All other DCs in the forest are configured to use the NT5DS hierarchy </LI> <BR /> <LI> Time rollback and roll-forward protection has been enabled via the <A href="#" target="_blank"> <I> maxnegphasecorrection </I> </A> and <A href="#" target="_blank"> <I> maxposphasecorrection </I> </A> registry settings or their policy-based equivalents. </LI> <BR /> </OL> <BR /> </OL> <BR /> <P> The importance of configuring these safeguards can't be stressed enough. Look at this <A href="#" target="_blank"> <I> post </I> </A> to see what happens when time gets out of whack. </P> <BR /> <P> 3. <B> USN Rollbacks </B> </P> <BR /> <P> USN rollbacks are caused when the contents of an Active Directory database move back in time via an unsupported restore. Root causes for USN rollbacks include: </P> <BR /> <UL> <BR /> <LI> Manually copying a previous version of the database into place while the DC is offline </LI> <BR /> <LI> P2V conversions in multi-domain forests </LI> <BR /> <LI> Snapshot restores of physical and especially virtual DCs. For virtual environments, both the virtual host environment AND the underlying guest DCs should be <A href="#" target="_blank"> <I> Virtual Machine Generation ID capable </I> </A> (Windows Server 2012 or later). Both <A href="#" target="_blank"> <I> Microsoft </I> </A> and <A href="#" target="_blank"> <I> VMware </I> </A> ship VM-Generation ID aware hypervisors. </LI> <BR /> </UL> <BR /> <P> <B> Events, errors and symptoms that indicate you have lingering objects </B> </P> <B> <BR /> </B> <P> <B> </B> Active Directory logs an array of events and replication status codes when lingering objects are detected. 
It is important to note that while errors appear on the destination DC, it is the source DC being replicated from that contains the lingering object that is blocking replication. A summary of events and replication status codes is listed in the table below: </P> <BR /> <TABLE> <TBODY><TR> <TD> <B> Event or Error status </B> </TD> <TD> <B> Event or error text </B> </TD> <TD> <B> Implication </B> </TD> </TR> <TR> <TD> AD Replication status <A href="#" target="_blank"> <B> <I> 8606 </I> </B> </A> <I> </I> </TD> <TD> "Insufficient attributes were given to create an object. This object may not exist because it may have been deleted." </TD> <TD> Lingering objects are present on the source DC (destination DC is operating in Strict Replication Consistency mode) </TD> </TR> <TR> <TD> AD Replication status <A href="#" target="_blank"> <B> <I> 8614 </I> </B> </A> </TD> <TD> The directory service cannot replicate with this server because the time since the last replication with this server has exceeded the tombstone lifetime. </TD> <TD> Lingering objects likely exist in the environment </TD> </TR> <TR> <TD> AD Replication status 8240 </TD> <TD> There is no such object on the server </TD> <TD> Lingering object may exist on the source DC </TD> </TR> <TR> <TD> Directory Service <A href="#" target="_blank"> <B> <I> event ID 1988 </I> </B> </A> </TD> <TD> Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database. </TD> <TD> Lingering objects exist on the source DC specified in the event <BR /> <P> (Destination DC is running with Strict Replication Consistency) </P> <BR /> </TD> </TR> <TR> <TD> Directory Service <A href="#" target="_blank"> <B> <I> event </I> </B> <I> <B> ID 1388 </B> </I> </A> </TD> <TD> This destination system received an update for an object that should have been present locally but was not. 
</TD> <TD> Lingering objects were reanimated on the DC logging the event <BR /> <P> Destination DC is running with Loose Replication Consistency </P> <BR /> </TD> </TR> <TR> <TD> Directory Service <A href="#" target="_blank"> <B> <I> event ID 2042 </I> </B> </A> </TD> <TD> It has been too long since this server last replicated with the named source server. </TD> <TD> Lingering object may exist on the source DC </TD> </TR> </TBODY></TABLE> <BR /> <P> <B> A comparison of Tools to remove Lingering Objects </B> </P> <B> <BR /> </B> <P> <B> </B> </P> <BR /> <P> The table below compares the Lingering Object Liquidator with currently available tools that can remove lingering objects: </P> <BR /> <TABLE> <TBODY><TR> <TD> <B> Removal method </B> </TD> <TD> <B> Object / Partition &amp; Removal Capabilities </B> </TD> <TD> <B> Details </B> </TD> </TR> <TR> <TD> Lingering Object Liquidator </TD> <TD> Per-object and per-partition removal <BR /> <P> Leverages: </P> <BR /> <UL> <BR /> <LI> RemoveLingeringObjects LDAP rootDSE modification </LI> <BR /> <LI> DRSReplicaVerifyObjects method </LI> <BR /> </UL> <BR /> </TD> <TD> <BR /> <UL> <BR /> <LI> GUI-based. </LI> <BR /> <LI> Quickly displays all lingering objects in the forest to which the executing computer is joined. </LI> <BR /> <LI> Built-in discovery via DRSReplicaVerifyObjects method </LI> <BR /> <LI> Automated method to remove lingering objects from all partitions </LI> <BR /> <LI> Removes lingering objects from all DCs (including RODCs) but not lingering links. 
</LI> <BR /> <LI> Windows Server 2008 and later DCs (will not work against Windows Server 2003 DCs) </LI> <BR /> </UL> <BR /> </TD> </TR> <TR> <TD> Repldiag /removelingeringobjects </TD> <TD> Per-partition removal <BR /> <P> Leverages: </P> <BR /> <UL> <BR /> <LI> DRSReplicaVerifyObjects method </LI> <BR /> </UL> <BR /> </TD> <TD> <BR /> <UL> <BR /> <LI> Command line only </LI> <BR /> <LI> Automated method to remove lingering objects from all partitions </LI> <BR /> <LI> Built-in discovery via DRSReplicaVerifyObjects </LI> <BR /> <LI> Displays discovered objects in events on DCs </LI> <BR /> <LI> Does not remove lingering links. Does not remove lingering objects from RODCs (yet) </LI> <BR /> </UL> <BR /> </TD> </TR> <TR> <TD> LDAP RemoveLingeringObjects rootDSE primitive (most commonly executed using LDP.EXE or an LDIFDE import script) </TD> <TD> Per-object removal </TD> <TD> <BR /> <UL> <BR /> <LI> Requires a separate discovery method </LI> <BR /> <LI> Removes a single object per execution unless scripted. </LI> <BR /> </UL> <BR /> </TD> </TR> <TR> <TD> Repadmin /removelingeringobjects </TD> <TD> Per-partition removal <BR /> <P> Leverages: </P> <BR /> <UL> <BR /> <LI> DRSReplicaVerifyObjects method </LI> <BR /> </UL> <BR /> </TD> <TD> <BR /> <UL> <BR /> <LI> Command line only </LI> <BR /> <LI> Built-in discovery via DRSReplicaVerifyObjects </LI> <BR /> <LI> Displays discovered objects in events on DCs </LI> <BR /> <LI> Requires many executions if a comprehensive (n * (n-1)) pairwise cleanup is required. Note: repldiag and the Lingering Object Liquidator tool automate this task. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> The Repldiag and Lingering Object Liquidator tools are preferred because of their ease of use and holistic approach to lingering object removal. 
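<P> To make the (n * (n-1)) execution count from the table concrete, here is a minimal Python sketch that enumerates the repadmin invocations a comprehensive manual pairwise cleanup of a single partition would require. The DC names, naming context and the DSA GUID placeholder are illustrative only, not real directory data: </P>

```python
from itertools import permutations

def pairwise_cleanup_commands(dcs, naming_context):
    """Generate one 'repadmin /removelingeringobjects' invocation per
    (target, reference) DC pair -- the n * (n-1) executions that tools
    like repldiag and Lingering Object Liquidator automate for you."""
    commands = []
    for target, reference in permutations(dcs, 2):
        # A real invocation needs the reference DC's DSA object GUID,
        # not its name; '<DSA-GUID-of-...>' stands in for it here.
        commands.append(
            f"repadmin /removelingeringobjects {target} "
            f"<DSA-GUID-of-{reference}> {naming_context} /advisory_mode"
        )
    return commands

cmds = pairwise_cleanup_commands(
    ["DC1", "DC2", "DC3", "DC4"], "DC=contoso,DC=com")
print(len(cmds))  # 4 * 3 = 12 executions for a single partition
```

<P> Even a modest 10-DC forest implies 90 executions per partition, which is why the automated tools are preferred. </P>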
</P> <BR /> <P> <B> Why you should care about lingering object removal </B> <B> </B> </P> <BR /> <P> Widely known as the gift that keeps on giving, lingering objects should be removed for the following reasons: </P> <BR /> <UL> <BR /> <LI> Lingering objects can result in long-term divergence for objects and attributes residing on different DCs in your Active Directory forest </LI> <BR /> <LI> The presence of lingering objects prevents the replication of newer creates, deletes and modifications to destination DCs configured to use strict replication consistency. These un-replicated changes may apply to objects or attributes on users, computers, groups, group membership or ACLs. </LI> <BR /> <LI> Objects intentionally deleted by admins or applications continue to exist as live objects on DCs that have yet to inbound replicate knowledge of the deletes. </LI> <BR /> </UL> <BR /> <P> Once present, lingering objects rarely go away until you implement a comprehensive removal solution. Lingering objects are the unwanted houseguests in AD that you just can't get rid of. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Mother in law jokes… a timeless classic. </P> <BR /> <P> We commonly find these little buggers to be the root cause of an array of symptoms ranging from logon failures to Exchange, Lync and AD DS service outages. Some outages are resolved after lengthy troubleshooting only to have the issue return weeks later. </P> <BR /> <P> In the remainder of this post, we give you everything needed to eradicate lingering objects from your environment using the Lingering Object Liquidator. </P> <BR /> <P> Repldiag.exe is another tool that will automate lingering object removal. It is good for most environments, but it does not provide an interface to see the objects, clean up RODCs (yet) or remove abandoned objects. 
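<P> Since long-term replication failure is the dominant cause, a simple way to reason about your exposure is to compare each DC's last successful inbound replication against the tombstone lifetime. A hedged Python sketch (the DC names and dates are made up; in practice you would feed it timestamps parsed from REPADMIN /SHOWREPL * /CSV): </P>

```python
from datetime import datetime, timedelta

TSL_DAYS = 180  # default TSL for forests created on newer OS versions; older forests may use 60

def dcs_at_risk(last_success_by_dc, now=None, tsl_days=TSL_DAYS):
    """Return the DCs whose last successful inbound replication is older
    than the tombstone lifetime -- prime candidates for harboring
    lingering objects. Input maps DC name -> datetime of last success."""
    now = now or datetime.utcnow()
    limit = timedelta(days=tsl_days)
    return sorted(dc for dc, last in last_success_by_dc.items()
                  if now - last > limit)

sample = {
    "DC1": datetime(2014, 9, 1),   # replicated recently -- fine
    "DC2": datetime(2014, 2, 1),   # more than 180 days stale
}
print(dcs_at_risk(sample, now=datetime(2014, 9, 15)))  # ['DC2']
```

<P> Anything this check flags should be investigated before the stale DC is allowed back into normal replication. </P>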
</P> <BR /> <H2> Introducing Lingering Object Liquidator </H2> <BR /> <TABLE> <TBODY><TR> <TD> <IMG src="" /> More: </TD> <TD> Lingering Object Liquidator automates the discovery and removal of lingering objects by using the <A href="#" target="_blank"> <B> <I> DRSReplicaVerifyObjects </I> </B> </A> <B> </B> method used by repadmin /removelingeringobjects and repldiag, combined with the <A href="#" target="_blank"> <B> <I> removeLingeringObject </I> </B> </A> rootDSE primitive used by LDP.EXE. Tool features include: <BR /> <UL> <BR /> <LI> Combines both discovery and removal of lingering objects in one interface </LI> <BR /> <LI> Is available via the Microsoft Connect site </LI> <BR /> <LI> The version of the tool at the Microsoft Connect site is an early beta build and does not have the fit and finish of a finished product </LI> <BR /> <LI> Feature improvements beyond what you see in this version are under consideration </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <H3> How to obtain Lingering Object Liquidator </H3> <P> Updated October 9th, 2017 with link to new released version. </P> <BR /> <P> Download LoL from this link: <A href="#" target="_blank"> </A> Read about this newer released version here: <A href="#" target="_blank"> </A> </P> <P> If the download does not work, try opening the link from an InPrivate browser tab.&nbsp; If it still does not work, follow these steps: </P> <BR /> <P> 1. Log on to the Microsoft Connect site (using the <B> Sign in </B> link) with a Microsoft account: </P> <BR /> <BLOCKQUOTE> <BR /> <P> <I> <A href="#" target="_blank"> </A> </I> </P> <BR /> </BLOCKQUOTE> <BR /> <BLOCKQUOTE> <BR /> <P> Note: You may have to create a profile on the site if you have never participated in Connect. </P> <BR /> </BLOCKQUOTE> <BR /> <P> 2. Open the Non-feedback Product Directory: </P> <BR /> <BLOCKQUOTE> <BR /> <P> <A href="#" target="_blank"> <I> </I> </A> </P> <BR /> </BLOCKQUOTE> <BR /> <P> 3. 
Join the following program: </P> <BR /> <BLOCKQUOTE> <BR /> <P> <B> AD Health </B> </P> <BR /> </BLOCKQUOTE> <BR /> <BLOCKQUOTE> <BR /> <P> Product Azure Active Directory Connection <A href="#" target="_blank"> <I> Join link </I> </A> </P> <BR /> </BLOCKQUOTE> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 4. Click the <A href="#" target="_blank"> <I> Downloads </I> </A> link to see a list of downloads, or this <A href="#" target="_blank"> link </A> <I> </I> to go directly to the Lingering Objects Liquidator download. (Note: the direct link may become invalid as the tool gets updated.) <BR /> <BR /> Updated 8/01/2016 with link to latest version of the tool </P> <BR /> <P> 5. Download all associated files </P> <BR /> <P> 6. Double click on the downloaded executable to open the tool. </P> <BR /> Tool Requirements <BR /> <P> 1. Install Lingering Object Liquidator on a DC or member computer in the forest you want to remove lingering objects from. </P> <BR /> <P> 2. .NET 4.5 must be installed on the computer that is executing the tool. </P> <BR /> <P> 3. Permissions: The user account running the tool must have Domain Admin credentials for each domain in the forest that the executing computer resides in. Members of the Enterprise Admins group have domain admin credentials in all domains within a forest by default. Domain Admin credentials are sufficient in a single domain or single domain forest. </P> <BR /> <P> 4. The admin workstation must have connectivity over the same ports and protocols required of a domain-joined member computer or domain controller against any DC in the forest. Protocols of interest include DNS, Kerberos, RPC, LDAP and the ephemeral port range used by the targeted DC. See <A href="#" target="_blank"> <I> TechNet </I> </A> for more detail. Of specific concern: Pre-W2K8 DCs communicate over the “low” ephemeral port range between 1024 and 5000, while post-W2K3 DCs use the “high” ephemeral port range between 49152 and 65535. 
Environments containing both OS version families will need to enable connectivity over both port ranges. </P> <BR /> <P> 5. You must enable the Remote Event Log Management (RPC) firewall rule on any DC that needs scanning. Otherwise, the tool displays a window stating, "Exception: The RPC server is unavailable" </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 6. The liquidation of lingering objects in AD Lightweight Directory Services (AD LDS / ADAM) environments is not supported. </P> <BR /> <P> 7. You cannot use the tool to clean up lingering objects on DCs running Windows Server 2003.&nbsp; The tool leverages the event subscriptions feature, which wasn’t added until Windows Server 2008. </P> <BR /> Walkthrough <BR /> <P> Lingering Object Detection: <BR /> <BR /> Run the tool as Domain Administrator (Enterprise Administrator if you want to scan the entire forest). Error 8453 is observed if the tool is not run elevated. </P> <BR /> <P> 1. Launch <B> LoL.exe </B> . </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 2. From the <B> <STRONG> Topology Detection </STRONG> </B> section, select <B> Fast </B> . <BR /> <BR /> Fast detection populates the Naming Context, Reference DC and Target DC lists by querying the local DC. Thorough detection does a more exhaustive search of all DCs and leverages DC Locator and DSBind calls. Note that Thorough detection will likely fail if one or more DCs are unreachable. </P> <BR /> <P> 3. Take a quick walk through the UI: </P> <BR /> <P> <B> Naming Context: </B> </P> <BR /> <P> <IMG src="" /> <B> </B> </P> <BR /> <P> <B> Reference DC: </B> the DC you will compare to the target DC. The reference DC hosts a writeable copy of the partition. <B> </B> </P> <BR /> <P> <IMG src="" /> <B> </B> </P> <BR /> <P> Note: ChildDC2 should not be listed here since it is an RODC, and RODCs are not valid reference DCs for lingering object removal. 
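<P> The ephemeral-port requirement in item 4 above can be summarized programmatically. A small illustrative Python helper (the port ranges come straight from the text; the OS-version cutoff is simplified to before/after Windows Server 2008): </P>

```python
def required_ranges(dc_os_versions):
    """Given the Windows Server versions (as years) of the DCs in scope,
    return the ephemeral port ranges the admin workstation must be able
    to reach -- both ranges in a mixed environment."""
    needed = set()
    for version in dc_os_versions:
        # Pre-2008 DCs use the "low" range; 2008 and later use the "high" range.
        needed.add((1024, 5000) if version < 2008 else (49152, 65535))
    return sorted(needed)

print(required_ranges([2003, 2012]))  # [(1024, 5000), (49152, 65535)]
```

<P> A mixed 2003/2012 environment needs both ranges open, which is exactly the case the paragraph above warns about. </P>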
</P> <BR /> <TABLE> <TBODY><TR> <TD> <IMG src="" /> More: </TD> <TD> The version of the tool is still in development and does not represent the finished product. In other words, expect crashes, quirks and everything else normally encountered with beta software. </TD> </TR> </TBODY></TABLE> <BR /> <P> <B> Target DC: </B> the DC that lingering objects are to be removed from <B> </B> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 4. Click <B> Detect </B> to use these DCs for the comparison. If you want to scan all partitions and all DCs: Leave all fields blank to have the entire environment scanned, and then click <B> Detect. </B> </P> <BR /> <P> The tool does a comparison amongst all DCs for all partitions in a pairwise fashion when all fields are left blank. In a large environment, this comparison will take a great deal of time (possibly even days) as the operation targets (n * (n-1)) DC pairs in the forest for all locally held partitions. For shorter, targeted operations, select a naming context, reference DC and target DC. The reference DC must hold a writable copy of the selected naming context. Note that clicking <B> Stop </B> does not actually stop the server-side API; it merely stops the work in the client-side tool. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> During the scan, several buttons are disabled. The current count of lingering objects is displayed in the status bar at the bottom of the screen along with the current tool status. During this execution phase, the tool runs in an advisory mode and reads the event log data reported on each target DC. </P> <BR /> <P> Note: The Directory Service event log may completely fill up if the environment contains large numbers of lingering objects and the log is at its default maximum size. The tool leverages the same lingering object discovery method as repadmin and repldiag, logging one event per lingering object found. 
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> When the scan is complete, the status bar updates, buttons are re-enabled and the total count of lingering objects is displayed. The Result pane at the bottom of the window updates with any errors encountered during the scan. </P> <BR /> <P> If you see Error 1396 or Error 8440 in the status pane, you are using an early beta-preview version of the tool and should use the latest version. <BR /> <BR /> Error 1396 is logged if the tool incorrectly used an RODC as a reference DC. <BR /> <BR /> Error 8440 is logged when the targeted reference DC doesn't host a writable copy of the partition. </P> <BR /> <TABLE> <TBODY><TR> <TD> <IMG src="" /> Note: </TD> <TD> Lingering Object Liquidator discovery method <BR /> <UL> <BR /> <LI> Leverages DRSReplicaVerifyObjects method in Advisory Mode </LI> <BR /> <LI> Runs for all DCs and all partitions </LI> <BR /> <LI> Collects lingering object event ID 1946 events and displays the objects in the main content pane </LI> <BR /> <LI> List can be exported to CSV for offline analysis (or modification for import) </LI> <BR /> <LI> Supports import and removal of objects from a CSV file (leverage for objects not discoverable using DRSReplicaVerifyObjects) </LI> <BR /> <LI> Supports removal of objects by DRSReplicaVerifyObjects and LDAP rootDSE removeLingeringobjects modification </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> The tool leverages the Advisory Mode method exposed by <A href="#" target="_blank"> <I> DRSReplicaVerifyObjects </I> </A> that both repadmin /removelingeringobjects /Advisory_Mode and repldiag /removelingeringobjects /advisorymode use. In addition to the normal <I> Advisory Mode </I> related events logged on each DC, it displays each of the lingering objects within the main content pane. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Results of the scan are logged in the Results pane. 
Many more details of all operations are logged in the <B> linger&lt;Date-TimeStamp&gt;.log.txt </B> file in the same directory as the tool's executable. </P> <BR /> <P> The <B> Export </B> button allows you to export a list of all lingering objects listed in the main pane into a CSV file. View the file in Excel, modify if necessary and use the <B> Import </B> button later to view the objects without having to do a new scan. The Import feature is also useful if you discover abandoned objects (not discoverable with DRSReplicaVerifyObjects) that you need to remove. We briefly discuss abandoned objects later in this post. </P> <BR /> <P> A note about <B> transient </B> lingering objects: </P> <BR /> <P> Garbage collection is an independent process that runs on each DC every 12 hours by default. One of its jobs is to remove objects that have been deleted and have existed as a tombstone for greater than the tombstone lifetime number of days. There is a rolling 12-hour period where an object eligible for garbage collection exists on some DCs but has already been removed by the garbage collection process on other DCs. These objects will also be reported as lingering objects by the tool; however, no action is required, as they are removed automatically the next time the garbage collection process runs on the DC. </P> <BR /> Removal of individual objects <BR /> <P> The tool allows you to remove objects a handful at a time, if desired, using the <B> Remove </B> button: </P> <BR /> <P> 5. To remove individual objects, select a single object or multi-select several (hold down the <B> Ctrl </B> key to select multiple objects, or the <B> SHIFT </B> key to select a range of objects), and then select <B> Remove </B> . 
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The status bar updates with the new count of lingering objects and the status of the removal operation: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Logging for removed objects </P> <BR /> <P> The tool dumps a list of attributes for each object before removal and logs this along with the results of the object removal in the <B> removedLingeringObjects.log.txt </B> log file. This log file is in the same location as the tool's executable. </P> <BR /> <P> C:\tools\LingeringObjects\removedLingeringObjects&lt;DATE-TIMEStamp.log.txt </P> <BR /> <P> the obj DN: &lt;GUID=0bb376aa1c82a348997e5187ff012f4a&gt;;&lt;SID=010500000000000515000000609701d7b0ce8f6a3e529d669f040000&gt;;CN=Dick Schenk,OU=R&amp;D,DC=root,DC=contoso,DC=com </P> <BR /> <P> <BR /> <BR /> </P> <BR /> <P> objectClass:top, person, organizationalPerson, user; </P> <BR /> <P> sn:Schenk ; </P> <BR /> <P> whenCreated:20121126224220.0Z; </P> <BR /> <P> name:Dick Schenk; </P> <BR /> <P> objectSid:S-1-5-21-3607205728-1787809456-1721586238-1183;primaryGroupID:513; </P> <BR /> <P> sAMAccountType:805306368; </P> <BR /> <P> uSNChanged:32958; </P> <BR /> <P> objectCategory:&lt;GUID=11ba1167b1b0af429187547c7d089c61&gt;;CN=Person,CN=Schema,CN=Configuration,DC=root,DC=contoso,DC=com; </P> <BR /> <P> whenChanged:20121126224322.0Z; </P> <BR /> <P> cn:Dick Schenk; </P> <BR /> <P> uSNCreated:32958; </P> <BR /> <P> l:Boulder; </P> <BR /> <P> distinguishedName:&lt;GUID=0bb376aa1c82a348997e5187ff012f4a&gt;;&lt;SID=010500000000000515000000609701d7b0ce8f6a3e529d669f040000&gt;;CN=Dick Schenk,OU=R&amp;D,DC=root,DC=contoso,DC=com; </P> <BR /> <P> displayName:Dick Schenk ; </P> <BR /> <P> st:Colorado; </P> <BR /> <P> dSCorePropagationData:16010101000000.0Z; </P> <BR /> <P>; </P> <BR /> <P> givenName:Dick; </P> <BR /> <P> instanceType:0; </P> <BR /> <P> sAMAccountName:Dick; </P> <BR /> <P> userAccountControl:650; </P> <BR /> <P> 
objectGUID:aa76b30b-821c-48a3-997e-5187ff012f4a; </P> <BR /> <P> value is :&lt;GUID=70ff33ce-2f41-4bf4-b7ca-7fa71d4ca13e&gt;:&lt;GUID=aa76b30b-821c-48a3-997e-5187ff012f4a&gt; </P> <BR /> <P> Lingering Obj CN=Dick Schenk,OU=R&amp;D,DC=root,DC=contoso,DC=com is removed from the directory, mod response result code = Success </P> <BR /> <P> ---------------------------------------------- </P> <BR /> <P> RemoveLingeringObject returned Success </P> <BR /> <H3> Removal of all objects </H3> <BR /> <P> The <B> Remove All </B> button removes all lingering objects from all DCs in the environment. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <B> To remove all lingering objects from the environment: </B> </P> <BR /> <P> 1. Click the <B> Remove All </B> button. The status bar updates with the count of lingering objects removed. (The count may differ from the discovered amount due to a bug in the tool; this is a display issue only, and the objects are actually removed.) </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 2. Close the tool and reopen it so that the main content pane clears. </P> <BR /> <P> 3. Click the <B> Detect </B> button and verify no lingering objects are found. </P> <BR /> <P> <IMG src="" /> </P> <BR /> Abandoned object removal using the new tool <BR /> <P> None of the currently available lingering object removal tools will identify a special sub-class of lingering objects referred to internally as "abandoned objects". </P> <BR /> <P> An abandoned object is an object created on one DC that never got replicated to other DCs hosting a writable copy of the NC, but does get replicated to DCs/GCs hosting a read-only copy of the NC. The originating DC goes offline prior to replicating the originating write to other DCs that contain a writable copy of the partition. </P> <BR /> <P> The Lingering Object Liquidator tool does not currently discover abandoned objects automatically, so a manual method is required. </P> <BR /> <P> 1. 
Identify abandoned objects based on Oabvalidate and replication metadata output. </P> <BR /> <BLOCKQUOTE> <BR /> <P> Abandoned objects can be removed with the LDAP RemoveLingeringObject rootDSE modify procedure, and so Lingering Objects Liquidator is able to remove these objects. </P> <BR /> </BLOCKQUOTE> <BR /> <P> 2. Build a CSV file for import into the tool. Once they are visible in the tool, simply click the Remove button to get rid of them. </P> <BR /> <BLOCKQUOTE> <BR /> <P> a. To create a Lingering Objects Liquidator tool importable CSV file: </P> <BR /> </BLOCKQUOTE> <BR /> <BLOCKQUOTE> <BR /> <P> Collect the data in a comma-separated value (CSV) file with the following data: </P> <BR /> </BLOCKQUOTE> <BR /> <TABLE> <TBODY><TR> <TD> FQDN of RWDC </TD> <TD> CNAME of RWDC </TD> <TD> FQDN of DC to remove object from </TD> <TD> DN of the object </TD> <TD> Object GUID of the object </TD> <TD> DN of the object's partition </TD> </TR> </TBODY></TABLE> <BR /> <P> 3. Once you have the file, open the <B> Lingering Objects </B> tool and select the <B> Import </B> button, browse to the file and choose Open. </P> <BR /> <P> 4. Select all objects and then choose <B> Remove </B> . </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Review replication metadata to verify the objects were removed. 
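<P> The importable CSV described in step 2 can be produced with a short script. A Python sketch with placeholder directory data (the column order follows the table above; every value shown is fabricated for illustration, and whether the tool expects a header row is not documented here, so the sketch writes data rows only): </P>

```python
import csv

# Column layout from the table above -- one row per abandoned object.
COLUMNS = [
    "FQDN of RWDC", "CNAME of RWDC", "FQDN of DC to remove object from",
    "DN of the object", "Object GUID of the object",
    "DN of the object's partition",
]

# Placeholder values only; substitute data gathered from Oabvalidate
# and replication metadata.
rows = [[
    "dc1.root.contoso.com",
    "<dsa-guid>._msdcs.root.contoso.com",
    "childdc1.child.root.contoso.com",
    "CN=AbandonedUser,OU=Sales,DC=root,DC=contoso,DC=com",
    "0bb376aa-1c82-a348-997e-5187ff012f4a",
    "DC=root,DC=contoso,DC=com",
]]

with open("abandoned-objects.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

<P> Point the tool's Import button at the resulting file, verify the objects appear in the main pane, then remove them as in step 4. </P>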
</P> <BR /> Resources <BR /> <P> For those that want even more detail on lingering object troubleshooting, check out the following: </P> <BR /> <UL> <BR /> <LI> TechNet - <A href="#" target="_blank"> <I> Fixing Replication Lingering Object Problems (Event IDs 1388, 1988, 2042) </I> </A> </LI> <BR /> <LI> KB 910205 - <A href="#" target="_blank"> <I> Information about lingering objects in a Windows Server Active Directory forest </I> </A> </LI> <BR /> <LI> KB 2028495 - <A href="#" target="_blank"> <I> Troubleshooting AD Replication error 8606: "Insufficient attributes were given to create an object" </I> </A> </LI> <BR /> <LI> Glenn LeCheminant’s weblog: <A href="#" target="_blank"> <I> Clean that Active Directory forest of lingering objects </I> </A> </LI> <BR /> </UL> <BR /> <P> To prevent lingering objects: </P> <BR /> <UL> <BR /> <LI> Actively monitor for AD replication failures using a tool like the <A href="#" target="_blank"> <I> AD Replication Status </I> </A> tool. </LI> <BR /> <LI> Resolve AD replication errors within tombstone lifetime number of days. </LI> <BR /> <LI> Ensure your DCs are operating in <A href="#" target="_blank"> <I> Strict Replication Consistency </I> </A> mode </LI> <BR /> <LI> Protect against large jumps in system time </LI> <BR /> <LI> Use only supported methods or procedures to restore DCs. Do not: <BR /> <UL> <BR /> <LI> Restore backups older than TSL </LI> <BR /> <LI> Perform snapshot restores on pre Windows Server 2012 virtualized DCs on any virtualization platform </LI> <BR /> <LI> Perform snapshot restores on a Windows Server 2012 or later virtualized DC on a virtualization host that doesn't support VMGenerationID </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> <B> If you want hands-on practice troubleshooting AD replication errors, check out my lab on </B> <A href="#" target="_blank"> <B> <I> TechNet Virtual labs </I> </B> </A> . Alternatively, come to an instructor-led lab at TechEd Europe 2014. 
"EM-IL307 Troubleshooting Active Directory Replication Errors" </P> <BR /> <P> <B> For hands-on practice troubleshooting AD lingering objects </B> : check out my lab from TechEd Europe 2014. " <A href="#" target="_blank"> EM-IL400 Troubleshooting Active Directory Lingering Objects </A> " </P> <BR /> <P> 12/8/2015 Update: This lab is now available from TechNet Virtual labs <A href="#" target="_blank"> here </A> . </P> <BR /> <P> Finally, if you would like access to a hands-on lab for in-depth lingering object troubleshooting, let us know in the comments. </P> <BR /> <P> Thank you, </P> <BR /> <P> Justin Turner and A. Conner </P> <BR /> <P> Update 2014/11/20 – Added link to TechEd Lingering objects hands-on <A href="#" target="_blank"> lab </A> </P> <BR /> <P> Update 2014/12/17 – Added text to indicate the lack of support in LOL for cleanup of Windows Server 2003 DCs </P> <BR /> <P> Update 2015/12/08 – Added link to new location of Lingering Object hands-on <A href="#" target="_blank"> lab </A> <BR /> <BR /> Update 2016/08/01 – Updated LoL download <A href="#" target="_blank"> link </A> <BR /> Update 2017/10/09 – Added download link for released version of the tool on the Microsoft Download Center <A href="#" target="_blank"> </A> </P> </BODY></HTML> Fri, 05 Apr 2019 03:01:01 GMT JustinTurner 2019-04-05T03:01:01Z Managing the Store app pin to the Taskbar added in the Windows 8.1 Update <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 09, 2014 </STRONG> <BR /> Update 9/9/2014 <BR /> <BR /> Warren here yet again to update this blog to tell you that the GP to control the Store icon pin has shipped in the August 2014 update: <A href="#" target="_blank"> </A> . If you want to control the Store icon pinned to the taskbar, be sure to install the August 2014 update on all the targeted machines. 
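<P> If you want to verify in a lab that the policy has actually applied to a user, this is an ADMX-backed setting, so it should surface as a registry value under the per-user Explorer policies key once Group Policy processes. A word of caution: the value name below (<B>NoPinningStoreToTaskbar</B>) is my own assumption and is not stated in this post, so treat this .reg fragment as a sketch for verification only, and manage the setting through Group Policy rather than writing the registry directly: </P>

```text
; Hedged sketch of the policy's backing registry value.
; The value name "NoPinningStoreToTaskbar" is an ASSUMPTION, not confirmed
; by this post -- prefer configuring the Group Policy itself.
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Explorer]
"NoPinningStoreToTaskbar"=dword:00000001
```

<P> Checking for this value on a target machine (rather than setting it by hand) is a quick way to confirm the August 2014 update and the GP have both landed. </P>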
<BR /> <BR /> You can now have the Store disabled and the Store Icon removed via GP, or leave the Store enabled but remove the Store Icon pinned to the taskbar if that is what you need. The previous behavior of preventing the Store icon from being pinned during installation of Update 1 if the Store is disabled via GP remains unchanged. <BR /> <BR /> The new GP is named: “Do not allow pinning Store app to the Taskbar” <BR /> <BR /> The full path to the new GP is: “User Configuration\Administrative Templates\Start Menu and Taskbar\Do not allow pinning Store app to the Taskbar” <BR /> <BR /> Explain text for this GP: <BR /> <BR /> <BLOCKQUOTE> This policy setting allows you to control pinning the Store app to the Taskbar <BR /> <BR /> If you enable this policy setting, users cannot pin the Store app to the Taskbar. If the Store app is already pinned to the Taskbar, it will be removed from the Taskbar on next login <BR /> <BR /> If you disable or do not configure this policy setting, users can pin the Store app to the Taskbar <P> </P> </BLOCKQUOTE> <BR /> <BR /> <P> Thanks to everyone for their feedback on this issue and their patience while we developed and shipped the fix. <BR /> <BR /> =========================================================================================================== <BR /> <BR /> Update 7/14/2014 <BR /> <BR /> </P> <P> Warren here with an update on the Store icon issue. Good News! Your feedback has been heard, understood and acted upon. A fix is in the works that will address the scenarios below: </P> <BR /> <BR /> <P> </P> <BR /> <BR /> <P> <B> Scenario 1 </B> - You want to block the Store but have enabled the GP to block the Store after applying Windows 8.1 Update.&nbsp; A fix will be made to the GP, such that it will remove the Store Icon pin if the “disable Store” GP is already set. 
</P> <BR /> <BR /> <P> </P> <BR /> <BR /> <P> <B> Scenario 2 - </B> You want to provide access to the Store but want to remove the Store icon pin from the taskbar. A GP will be provided that can manage the Store icon pin. </P> <BR /> <BR /> <P> </P> <BR /> <BR /> <P> Thanks for all of your feedback on this issue! </P> <BR /> <BR /> <P> </P> <BR /> <BR /> <P> Warren <BR /> <BR /> =========================================================================================================== <BR /> <BR /> 4/9/2014 <BR /> <BR /> Warren here, posting with more news regarding the Windows 8.1 Update. Among the many features added by Windows 8.1 Update is that the Store icon will be pinned to the user's taskbar when users first log on after updating their PC with Windows 8.1 Update. <BR /> <BR /> Some companies will not want the Store icon pinned to the taskbar on company-owned devices.&nbsp; There are currently two Group Policy options to control the Store tile pin - one that you can use before deploying the update that will prevent the Store app from being pinned to the Taskbar, and another that you can use after the update has been deployed and the Store app has been pinned to the Taskbar. </P> <BR /> <BR /> <H3> Option 1:&nbsp; Turn off the Store application before Installing the Windows 8.1 Update </H3> <BR /> <BR /> Use the Group Policy “Turn off the Store application” <BR /> <BR /> <BR /> <BR /> As mentioned earlier, the Store Icon is pinned to the Taskbar at first logon after Windows 8.1 Update is applied. The Store application will not be pinned to the taskbar if the Group Policy “Turn off the Store application” is applied to the computer. This option is not retroactive. The Group Policy must be applied to the workstation before the update is applied. 
The full path to this Group Policy is: <BR /> <BR /> Computer Configuration\Administrative Templates\Windows Components\Store\Turn off the Store application <BR /> <BR /> Or <BR /> <BR /> User Configuration\Administrative Templates\Windows Components\Store\Turn off the Store application <BR /> <BR /> You can use either Group Policy. As the name of the policy indicates, this will completely disable the Store. If your desire is to allow access to the Store but you do not want the Store tile pinned to the Taskbar, see Option 2. <BR /> <BR /> <B> Important note: </B> By default the Group Policy setting “Turn off the Store application” will not show up in GPEDIT.MSC or GPMC.MSC if you run the tools on a Windows Server. You have two options: install the Remote Server Admin Tools (RSAT) on a Windows 8.1 client and edit the group policy from that machine, or install the Desktop Experience feature on the server used for editing Group Policy. The preferred method is to install the RSAT tools on a workstation. You can download the RSAT tools for Windows 8.1 here: <A href="#" target="_blank"> </A> <BR /> <BR /> <H3> Option 2:&nbsp; Use Group Policy to remove Pinned applications from the Taskbar after Installing the Update </H3> <BR /> <BR /> Use the Group Policy “Remove pinned programs from the Taskbar” <BR /> <BR /> <BR /> <BR /> This GP is a big hammer in that it will remove all pinned tiles from the taskbar, and users subject to the policy will not be able to pin any applications or tiles to the Taskbar. This accomplishes the goal of not pinning the Store tile to the taskbar and leaves the Store accessible from Start. <BR /> <BR /> <P> “User Configuration\Administrative Templates\Start Menu and Taskbar\Remove pinned programs from the Taskbar” </P> <H3> Other Options </H3> <P> The last available option at this time is to have users unpin the Store app on their systems. Programmatically changing the Taskbar pins is not supported nor encouraged by Microsoft. 
See <A href="#" target="_blank"> </A> </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:51 GMT TechCommunityAPIAdmin 2019-04-05T02:57:51Z Hate to see you go, but it’s time to move on to greener pastures. A farewell to Authorization Manager aka AzMan <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Aug 21, 2014 </STRONG> <BR /> Hi all, Jason here. Long time reader, first time blogger. AzMan is Microsoft’s tool to manage authorization to applications based on a user’s role. AzMan has been around since 2003 and has had a good run. Now it’s time to send it out to pasture. If you haven’t seen <A href="#" target="_blank"> this </A> article, AzMan has been added to the list of technologies that will eventually be removed from the OS. As of Server 2012 and 2012 R2, AzMan has been marked as deprecated, which is a term we use to let our customers know that the specific technology in question will be removed in a subsequent release of the OS. It has recently been announced that AzMan will no longer be in future releases of the Windows Server OS (after 2012 R2). <BR /> <BR /> What does this mean to you? If you are on a newer OS and use AzMan, not much (right now). If you use AzMan on, say, Server 2003, you need to either get AzMan prepped and ready on a newer OS or find a suitable replacement for role-based authorization. Keep in mind each OS has its own life cycle, so AzMan isn’t immediately going away. We have until well into 2023 before we see the last of AzMan. AzMan will continue to work on whichever OS you are currently using it on; just be aware of the OS life cycle to make sure that your OS is supported and, as such, so is your implementation of AzMan. The obvious question here is, where do we go? <BR /> <BR /> The best answer would be moving your application to be claims-aware. Claims allow you to make decisions on authorization based on data sent within the claim token. Want access based on user group in AD? Sounds like you want claims. 
Want authorization to a specific site based on who your manager is? Claims can do that. I don’t want to make it sound like this is an immediate “click here and it fixes everything for you” fix; you will have to recode your application to be able to consume claims sent by a claim provider, and that isn’t going to be flowers and unicorns. There will be some hard work to move it over; however, the gains will be huge, as there has been a large surge in claims-based applications and services in the last few years (O365 included). Windows already has a claims provider you can use to build claims tokens and send them to your application (this is ADFS if you haven’t heard, and I’d be surprised if you haven’t), and it’s either already in the OS or a download away (depending on which OS you are running). If you’re using AzMan and looking for the push to get you into the claims game, this is the nudge you’ve been looking for. <BR /> <BR /> A few things to keep in mind if you are intending to use ADFS for your claims provider: <BR /> <BR /> · ADFS is provided in 2003 R2; however, this is 1.x and does not have some of the features that 2.x+ has. Also, some of the terminology is different and could be confusing to start your claims experience with, not to mention 2003 is close to end of life. <BR /> <BR /> · ADFS is a separate download for 2008 and 2008 R2. It is provided in the OS as a role, but this is 1.1. You definitely want the downloaded version. 
(Make sure to get rollup 3 (KB2790338), update KB2843638, and update KB2896713) <BR /> <BR /> · ADFS is provided in the OS on 2012 (ADFS 2.1) and 2012 R2 (ADFS 3.0) <BR /> <BR /> A few helpful links to get you started with using claims-based authentication/authorization: <BR /> <BR /> Claims-Aware Applications <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> Building My First Claims-Aware ASP.NET Web Application <BR /> <BR /> <A href="#" target="_blank"> </A> <BR /> <BR /> <P> Hopefully these can give you enough of a starter to build a proof of concept and get your team ready to dive into the claims game. </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:44 GMT Jasben 2019-04-05T02:57:44Z It turns out that weird things can happen when you mix Windows Server 2003 and Windows Server 2012 R2 domain controllers <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 23, 2014 </STRONG> <BR /> <BR /> <BR /> <STRONG> UPDATE:&nbsp; The hotfix is now available for this issue!&nbsp; Get it at </STRONG> <A href="#" target="_blank"> <STRONG> </STRONG> </A> <BR /> <BR /> <STRONG> This hotfix applies to Windows Server 2012 R2 domain controllers and should prevent the specific problem discussed below from occurring. </STRONG> <BR /> <BR /> <STRONG> It’s important to note that the symptoms of users and computers not being able to log on can happen for a number of different reasons.&nbsp; Many of the folks in the comments have posted that they have these sorts of issues but don’t have Windows Server 2003 domain controllers, for example.&nbsp; If you’re still having problems after you have applied the hotfix, please call in a support case so that we can help you get those fixed! 
</STRONG> <BR /> <BR /> ===================================================== <BR /> <BR /> We have been getting quite a few calls lately where Kerberos authentication fails intermittently and users are unable to log on.&nbsp; By itself, that’s a type of call that we’re used to and we help our customers with all the time.&nbsp; Most experienced AD admins know that this can happen because of broken AD replication, unreachable DCs on your network, or a variety of other environmental issues that all of you likely work hard to avoid as much as possible - because let’s face it, the last thing any admin wants is to have users unable to log in – especially intermittently. <BR /> <BR /> Anyway, we’ve been getting more calls than normal about this lately, and that led us to take a closer look at what was going on.&nbsp; What we found is that there’s a problem that can manifest when you have Windows Server 2003 and Windows Server 2012 R2 domain controllers serving the same domain.&nbsp; Since many of you are trying very hard to get rid of your last Windows Server 2003 domain controllers, you might be running into this.&nbsp; In the case of the customers that called us, the login issues were actually preventing them from being able to complete their migration to Windows Server 2012 R2. <BR /> <BR /> We want all of our customers to be running their Active Directory on the latest supported OS version, which is frankly a lot more scalable, robust, and powerful than Windows Server 2003.&nbsp; We realize that upgrading an enterprise environment is not easy, and much less so when your users start to have problems during your upgrade.&nbsp; So we’re just going to come out and say it right up front: <BR /> <BR /> <STRONG> We are working on a hotfix for this issue </STRONG> , but it’s going to take us some time to get it out to you. In the meantime, here are some details about the problem and what you can do right now. <BR /> <BR /> <B> Symptoms include: </B> <BR /> <BR /> 1. 
When any domain user tries to log on to their computer, the logon may fail with “unknown username or bad password”. Only local logons are successful. <BR /> <BR /> If you look in the system event log, you may notice Kerberos Event ID 4 errors that look like this: <BR /> <BR /> <I> Event ID: 4 <BR /> Source: Kerberos <BR /> Type: Error <BR /> "The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/; This indicates that the password used to encrypt the Kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (, and the client realm.&nbsp;&nbsp; Please contact your system administrator." </I> <BR /> <BR /> <B> 2. </B> Operating Systems on which the issue has been seen: <A> <B> Windows 7, WS2008 R2, WS2012 R2 </B> </A> <B> </B> <BR /> <BR /> 3. This can affect <B> Clients and Servers </B> (including Domain Controllers) <BR /> <BR /> 4. This problem specifically occurs after the affected machine has changed its password. It can vary from a few minutes to a few hours after the change before the symptoms manifest. <BR /> <BR /> So, if you suspect you have a machine with this issue, check when it last changed its password and whether this was around the time when the issue started. <BR /> <BR /> This can be done using the <I> repadmin /showobjmeta </I> command. <BR /> <BR /> <I> Example: </I> <BR /> <BR /> <B> <I> Repadmin /showobjmeta * “CN=mem01,OU=Workstations,DC=contoso,DC=com” </I> </B> <B> </B> <BR /> <BR /> This command will get the object metadata for the mem01 server from all DCs. <BR /> <BR /> In the output, check the pwdLastSet attribute and see if the timestamp is around the time you started to see the problem on this machine. <BR /> <BR /> Example: <BR /> <BR /> <B> Why this happens: </B> <BR /> <BR /> The Kerberos client depends on a “salt” from the KDC in order to create the AES keys on the client side. 
These AES keys are used to hash the password that the user enters on the client, and protect it in transit over the wire so that it can’t be intercepted and decrypted. The “salt” refers to information that is fed into the algorithm used to generate the keys, so that the KDC is able to verify the password hash and issue tickets to the user. <BR /> <BR /> When a Windows 2012 R2 DC is promoted in an environment where Windows 2003 DCs are present, there is a mismatch in the encryption types that are supported on the KDCs and used for salting. Windows Server 2003 DCs do not support AES, and Windows Server 2012 R2 DCs don’t support DES for salting. <BR /> <BR /> You might be wondering why these encryption types matter.&nbsp; As computer hardware gets more powerful, older encryption methods become easier and easier to break.&nbsp; Thus, we are constantly incorporating newer, more powerful encryption into Windows and Kerberos in order to help protect your user passwords (and your data and your network). <BR /> <BR /> <B> Workaround: </B> <BR /> <BR /> If users are having the problem: <BR /> <BR /> Restart the computer that is experiencing the issue. This recreates the AES key as the client machine or member server reaches out to the KDC for the salt. <A> Usually, this will fix the issue temporarily </A> (at least until the next password change). <BR /> <BR /> <BR /> <BR /> To prevent this from happening, please apply the hotfix to all Windows Server 2012 R2 domain controllers in the environment. <BR /> <BR /> <BR /> <BR /> How to prevent this from happening: <BR /> <BR /> <B> Option 1: </B> Query Active Directory for the list of computers that are about to change their machine account password and proactively reset their password against a Windows Server 2012 R2 DC, and follow that by a reboot. 
<BR /> <BR /> There’s an advantage to doing it this way: since you are not disabling any encryption type and keeping things set at the default, you shouldn’t run into any other authentication-related issue as long as the machine account password is reset successfully. <BR /> <BR /> Unfortunately, doing this will mean a reboot of machines that are about to change their passwords, so plan on doing this during non-business hours when you can safely reboot workstations. <BR /> <BR /> We’ve created a quick PowerShell script that you can run to do this. <BR /> <BR /> Sample PS script: <BR /> <BR /> &gt; Import-Module ActiveDirectory <BR /> <BR /> &gt; Get-ADComputer -Filter * -Properties PasswordLastSet | Export-Csv machines.csv <BR /> <BR /> This will get you the list of machines and the dates they last set their password.&nbsp; By default machines will reset their password every 30 days.&nbsp;&nbsp; Open the created CSV file in Excel and identify the machines that last set their password 28 or 29 days prior (if you see a lot of machines that have dates well beyond the 30 days, it is likely these machines are no longer active). <BR /> <BR /> Reset Password: <BR /> <BR /> Once you have identified the machines that are most likely to hit the issue in the next couple of days, proactively reset their password by running the below command on those machines. <A> You can use tools such as PsExec, System Center, or other utilities that allow you to remotely execute the command instead of logging in interactively to each machine. </A> <BR /> <BR /> nltest /SC_CHANGE_PWD:&lt;DomainName&gt; /SERVER:&lt;Target Machine&gt; <BR /> <BR /> Then reboot. <BR /> <BR /> <B> Option 2: </B> Disable machine password change or increase the duration to 120 days. <BR /> <BR /> You should not run into this issue at all if password change is disabled. Normally we don’t recommend doing this since machine account passwords are a core part of your network security and should be changed regularly. 
However, because it’s an easy workaround, the best mitigation right now is to set it to 120 days. That way you buy time while you wait for the hotfix. <BR /> <BR /> If you go with this approach, make sure you set your machine account password duration back to normal after you’ve applied the hotfix that we’re working on. <BR /> <BR /> Here are the relevant Group Policy settings to use for this option: <BR /> <BR /> <B> Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options </B> <BR /> <BR /> <B> Domain Member:&nbsp; Maximum machine account password age: </B> <BR /> <BR /> <B> Domain Member: Disable machine account password changes: </B> <BR /> <BR /> <B> Option 3: </B> Disable AES in the environment by modifying Supported Encryption Types for Kerberos using Group Policy. This tells your domain controllers to use RC4-HMAC as the encryption algorithm, which is supported in Windows Server 2003, Windows Server 2012, and Windows Server 2012 R2. <BR /> <BR /> You may have heard that we had a security advisory recently to <A href="#" target="_blank"> disable RC4 in TLS </A> . Such attacks don’t apply to Kerberos authentication, but there is ongoing research in RC4, which is why new features such as <A href="#" target="_blank"> Protected Users </A> do not support RC4. Deploying this option on a domain computer will make it impossible for Protected Users to sign on, <STRONG> so be sure to remove the Group Policy once the Windows Server 2003 DCs are retired. </STRONG> <BR /> <BR /> The advantage to doing this is that once the policy is applied consistently, you don’t need to chase individual workstations. <B> </B> However, you’ll still have to reset machine account passwords and reboot computers to make sure they have new RC4-HMAC keys stored in Active Directory. 
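<P> Under the hood, the encryption types selected for Option 3 combine into a single bitmask (the same flag values used by the <B>msDS-SupportedEncryptionTypes</B> attribute). As a rough sketch of how the selections combine, the flag constants below are the commonly documented bit values; verify the resulting value in your own environment before relying on it: </P>

```python
# Documented bit values for the Kerberos supported-encryption-types bitmask
# (msDS-SupportedEncryptionTypes). This is an illustrative sketch, not a
# management tool -- set the actual policy through Group Policy.
DES_CBC_CRC             = 0x01
DES_CBC_MD5             = 0x02
RC4_HMAC_MD5            = 0x04
AES128_CTS_HMAC_SHA1_96 = 0x08
AES256_CTS_HMAC_SHA1_96 = 0x10

def enctype_mask(*flags):
    """OR the individual encryption-type flags into one bitmask."""
    mask = 0
    for flag in flags:
        mask |= flag
    return mask

# Option 3 as described above: RC4 only, with the AES bits deliberately clear.
rc4_only = enctype_mask(RC4_HMAC_MD5)

# Variant for unix/linux clients with DES-based keytab files.
rc4_plus_des = enctype_mask(RC4_HMAC_MD5, DES_CBC_CRC, DES_CBC_MD5)

print(hex(rc4_only))      # 0x4
print(hex(rc4_plus_des))  # 0x7
```

<P> Reading the effective bitmask back from a computer object is a quick way to confirm that AES really is excluded while the workaround is in place. </P>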
<BR /> <BR /> You should also make sure that the hotfix <A href="#" target="_blank"> </A> is in place on all of your Windows 7 clients and Windows Server 2008 R2 member servers; otherwise they may have other issues. <BR /> <BR /> Remember, if you take this option, then after the hotfix for this particular issue is released and applied on Windows Server 2012 R2 KDCs, you will need to modify it again in order to re-enable AES in the domain. The policy needs to be changed again, and all the machines will require a reboot. <BR /> <BR /> Here are the relevant group policy settings for this option: <BR /> <BR /> <B> Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options </B> <BR /> <BR /> <B> Network Security:&nbsp; Configure encryption types allowed for Kerberos: </B> <BR /> <BR /> Be sure to check: <B> RC4_HMAC_MD5 </B> <BR /> <BR /> If you have unix/linux clients that use keytab files that were configured with DES, enable: <B> DES_CBC_CRC, DES_CBC_MD5 </B> <BR /> <BR /> Make sure that AES128_HMAC_SHA1 and AES256_HMAC_SHA1 are <B> NOT checked </B> <BR /> <BR /> Finally, if you are experiencing this issue, please revisit this blog regularly for updates on the fix. <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <P> - The Directory Services Team </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:38 GMT TechCommunityAPIAdmin 2019-04-05T02:57:38Z An Update about the Windows 8.1 Update <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 16, 2014 </STRONG> <BR /> Hi everyone, David here.&nbsp; Today over at the <A href="#" target="_blank"> Springboard series blog </A> we announced some important news that applies to anyone who has been trying to roll out the Windows 8.1 update in an enterprise environment.&nbsp; We don’t usually do announcements about things being covered by other Microsoft blogs, but this one addresses something we’ve gotten a lot of questions about. 
<BR /> <BR /> If you haven’t read the blog, here’s the super-short version: <BR /> <BR /> - We have a <A href="#" target="_blank"> fix </A> for the Windows Update problem that prevents organizations from using WSUS to deploy the Windows 8.1 Update. <BR /> <BR /> - We’ll be issuing security updates for Windows 8.1 (without the update) in the catalog until August, instead of stopping next month as originally announced.&nbsp; This gives enterprises more time to test the feature changes in the Windows 8.1 Update and deploy them, without having to worry about not getting critical security updates. <BR /> <BR /> <P> <A href="#" target="_blank"> Click here </A> to read the full announcement. </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:33 GMT TechCommunityAPIAdmin 2019-04-05T02:57:33Z Options for Managing Go to Desktop or Start after Sign in in Windows 8.1 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Apr 07, 2014 </STRONG> <BR /> <P> Hi, David here.&nbsp; Over the past year we’ve gotten a lot of feedback from our customers about the pain of changing from older versions of Windows over to Windows 8 and Windows 8.1.&nbsp; While it’s a great OS with a lot of compelling features, it’s a big change – and as any desktop administrator will tell you, change is a really scary thing for users who just want to be able to log in and get their work done every day.&nbsp; Well, we listened, and in the update we’re releasing this week, we’ve made it easier for you to help manage the change for your users and make the transition to Windows 8.1 a little more friendly for them.&nbsp; Below is some awesome information courtesy of the inestimable Warren Williams. </P> <BR /> <P> First, a quick history lesson.&nbsp; Don’t worry, there’s not a quiz at the end. </P> <BR /> <H3> Start Screen history </H3> <BR /> <P> Starting with Windows 8.0, Start is the main application launch pad in Windows. 
Start replaces the Start Menu used in previous versions of Windows going back to Windows 95. </P> <BR /> <P> With each update of Windows 8.0, more control over Start’s configuration has been added. </P> <BR /> Windows 8.0 <BR /> <P> The Start Menu was removed from Windows and replaced by Start. The default behavior in Windows 8.0 is that users always boot to Start. There was no Microsoft-supported method of controlling the boot to Start behavior in Windows 8.0. </P> <BR /> Windows 8.1 <BR /> <P> In Windows 8.1, Microsoft added the ability for users and administrators to control what environment would be displayed when the user logged on. The user can either boot to the Start screen or the Desktop. The behavior was still to always boot to the Start screen; however, the behavior could be controlled manually with a setting in the Taskbar Navigation properties. Administrators could use a new Group Policy, “Go to the desktop instead of Start when signing in”, to specify what environment the user would see after signing in. </P> <BR /> <P> Everyone got that?&nbsp; Ok, let’s talk about the new stuff now. </P> <BR /> <H3> Windows 8.1 Update </H3> <BR /> <P> In Windows 8.1 Update, Microsoft added the ability for the OS to perform device type detection. After applying Windows 8.1 Update, tablet devices will boot to the Start Screen and have modern application file associations. All other device types boot to the desktop and have desktop application file associations. The two preceding behaviors occur if the default setting for Taskbar Navigation properties has not changed.&nbsp; Some things to note: </P> <BR /> <P> </P> <BR /> <UL> <BR /> <LI> If customizations to the Start Screen behavior had been made by the user before applying Windows 8.1 Update, those customizations will remain in effect. </LI> <BR /> <LI> Group policy will take precedence over Windows 8.1 Update default behavior. 
If the Boot to Desktop settings are controlled by Group Policy, a user will not be able to make changes to the Taskbar Navigation Properties. </LI> <BR /> </UL> <BR /> How Device Type Detection works in Windows 8.1 Update <BR /> <P> Device type detection in Windows 8.1 Update is accomplished by querying the value of Power_Platform_Role and taking action based on the value set. The value for Power_Platform_Role is set by the manufacturer of the device and cannot be changed. If the value for Power_Platform_Role is set to a value of 8, the user will sign in to Start. <STRONG> Any value other than 8 will cause the user to sign in to the desktop, instead of the Start Screen. </STRONG> </P> <BR /> <P> The possible values for Power_Platform_Role are: </P> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> PlatformRoleUnspecified </P> <BR /> </TD> <TD> <BR /> <P> 0 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleDesktop </P> <BR /> </TD> <TD> <BR /> <P> 1 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleMobile </P> <BR /> </TD> <TD> <BR /> <P> 2 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleWorkstation </P> <BR /> </TD> <TD> <BR /> <P> 3 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleEnterpriseServer </P> <BR /> </TD> <TD> <BR /> <P> 4 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleSOHOServer </P> <BR /> </TD> <TD> <BR /> <P> 5 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleAppliancePC </P> <BR /> </TD> <TD> <BR /> <P> 6 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRolePerformanceServer </P> <BR /> </TD> <TD> <BR /> <P> 7 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> PlatformRoleSlate </P> <BR /> </TD> <TD> <BR /> <P> 8 </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> Table 1: Power_Platform_Role Values </P> <BR /> <P> See this MSDN page for more information: “ <A href="#" target="_blank"> POWER_PLATFORM_ROLE enumeration </A> ” </P> <BR /> How to query a device’s Power_Platform_Role value <BR /> <P> Run the following 
command at an elevated cmd prompt: </P> <BR /> <P> powercfg /energy </P> <BR /> <P> </P> <BR /> <P> At the top of the generated report, look for the Platform Role field </P> <BR /> <P> </P> <BR /> <P> <IMG /> </P> <BR /> <P> To change the default behavior using Unattend.xml, see the Microsoft-Windows-Shell-Setup | DesktopOptimization | <A href="#" target="_blank"> GoToDesktopOnSignIn </A> </P> <BR /> Tablets that boot to the desktop <BR /> <P> It is possible for a tablet device to boot to the Desktop if the tablet’s Power_Platform_Role was set to a value other than 8 by the manufacturer. Windows does not set the value of Power_Platform_Role, nor can the value be changed. The value is set by the device manufacturer in the BIOS and is read by Windows at boot time and stored in WMI. </P> <BR /> <P> See: “POWER_PLATFORM_ROLE enumeration” - <A href="#" target="_blank"> </A> </P> <BR /> <BR /> <H3> Options to Control Sign in to Desktop Behavior in Windows 8.1 Update </H3> <BR /> <P> Fortunately, you can change the behavior without having to be an OEM. </P> <BR /> Manually - Taskbar Navigation Properties <BR /> <BR /> <P> To manually change the environment that the user logs on to, perform the following steps: </P> <BR /> <P> 1. Open the desktop </P> <BR /> <P> 2. Right-click the taskbar and select Properties </P> <BR /> <P> 3. Select the “Navigation” tab </P> <BR /> <P> a. If you want the Start Screen to load when a user logs on, uncheck the box “When I sign in or close all apps on a screen, go to the desktop instead of Start” </P> <BR /> <P> b. If you want the Desktop to load when a user logs on, check the box “When I sign in or close all apps on a screen, go to the desktop instead of Start” </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Figure 4: Taskbar Navigation Properties </P> <BR /> Administrative - Group Policy <BR /> <P> A Domain Administrator can use Group Policy to control the Boot to desktop behavior on many machines from a centralized location. 
If Group Policy is used to control this setting, the user will not be able to change the Boot to desktop behavior. If an administrator wants users to be able to set the desired behavior, they should set the default behavior in their image. The Group Policy is located in this path: </P> <BR /> <P> “User Configuration\Administrative Templates\Start Menu and Taskbar\Go to the desktop instead of Start when signing in” </P> <BR /> <P> Description of this Group Policy: </P> <BR /> <P> “This policy setting allows users to go to the desktop instead of the Start screen when they sign in. </P> <BR /> <P> If you enable this policy setting, users will always go to the desktop when they sign in. </P> <BR /> <P> If you disable this policy setting, users will always go to the Start screen when they sign in. </P> <BR /> <P> If you don’t configure this policy setting, the default setting for the user’s device will be used, and the user can choose to change it.” </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Figure 5: Group Policy to control "Go to desktop instead of Start" behavior </P> <BR /> Administrative – Deployment using an Unattend.xml answer file <BR /> <P> Deployment Admins can specify whether users go to Start or the desktop after signing in by using the DesktopOptimization tag in their unattend.xml file. This method allows admins to specify a default behavior and still allow users the ability to set their preferred sign-in environment. </P> <BR /> <P> &lt;DesktopOptimization&gt; </P> <BR /> <P> &lt;GoToDesktopOnSignIn&gt;true&lt;/GoToDesktopOnSignIn&gt; </P> <BR /> <P> &lt;/DesktopOptimization&gt; </P> <BR /> <P> For more information, consult the Windows Assessment and Deployment Kit (ADK) help file. The ADK can be downloaded from here: <A href="#" target="_blank"> </A> </P> <BR /> <P> Hopefully this information helps all of you out there with giving your users a better experience on Windows 8.1. 
</P> <BR /> <P> - Warren “The Updater” Williams </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:26 GMT TechCommunityAPIAdmin 2019-04-05T02:57:26Z Our UK Windows Directory Services Escalation Team is Hiring – Support Escalation Engineers. <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Feb 17, 2014 </STRONG> <BR /> Hi! It’s Linda Taylor here again from the Directory Services Escalation team in the UK. In this post, I want to tell you – <B> We are hiring in the UK!! </B> <BR /> <BR /> Would you like to join the UK Escalation Team and work on the most technically challenging and interesting Active Directory problems? Do you want to be the next “Ned Pyle”? <BR /> <BR /> Then read more… <BR /> <BR /> We are an Escalation Team based at the Microsoft Campus in Reading (UK). We are part of Microsoft Global Business Support, and we work with enterprise customers, helping them resolve the most critical Active Directory infrastructure problems as well as enabling them to get the best out of Microsoft Windows and identity-related technologies. The work we do is no ordinary support – we work with a huge variety of customer environments, and there are rarely two problems which are the same. We are the experts in our field, and we work closely with the product group to help make Windows and all our other technologies better. <BR /> <BR /> You will need strong AD knowledge, great customer service skills, strong troubleshooting skills, and great collaboration and teamwork. <BR /> <BR /> You can find more of the job details here: <BR /> <BR /> <A href="#" target="_blank">;pg=0&amp;so=&amp;rw=1&amp;jid=130665&amp;jlang=EN&amp;pp=SS </A> <BR /> <BR /> <P> Linda. </P> </BODY></HTML> Fri, 05 Apr 2019 02:57:01 GMT TechCommunityAPIAdmin 2019-04-05T02:57:01Z Adding shortcuts on desktop using Group Policy Preferences in Windows 8 and Windows 8.1 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Feb 17, 2014 </STRONG> <BR /> <P> Hi All!
</P> <BR /> <P> My name is Saurabh Koshta and I am with the Core Team at Microsoft. Currently I work in the client space, so supporting all aspects of Windows 8 and Windows 8.1 is my primary role. </P> <BR /> <P> We very often get calls from customers who are evaluating Windows 8/Windows 8.1 for deployment, but are concerned about some of the changes in the UI that may confuse their users. A typical concern we hear is that users are used to having shortcuts on the desktop for Computer, Documents, and Network. So, I wanted to take a minute to show you how you can easily add those shortcuts (or others) to desktops using Group Policy Preferences. </P> <BR /> <P> I have an OU in my domain called “Domain Computers”, which has Windows 8 machines. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The next step is to create a policy and link it to the “Domain Computers” OU. In this case it is called “Shortcut”. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Edit the policy and go to the following location: </P> <BR /> <P> Computer Configuration -- &gt; Preferences -- &gt; Windows Settings -- &gt; Shortcuts </P> <BR /> <P> Highlight Shortcuts, right-click in the right pane, and select New Shortcut </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> In the ‘New Shortcut Properties’, make the following changes so the values look like below: </P> <BR /> <P> 1. Action : Update </P> <BR /> <P> 2. Target type : Shell Object </P> <BR /> <P> 3. Location : All Users Desktop </P> <BR /> <P> 4. For Target object, click on the browse option and then choose ‘Computer’ </P> <BR /> <P> 5. Name : My Computer </P> <BR /> <P> Leave the rest of the options at their defaults. Once you have made all the changes, it should look like below: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Similarly, for Network the options are: </P> <BR /> <P> 1. Action : Update </P> <BR /> <P> 2. Target type : Shell Object </P> <BR /> <P> 3. Location : All Users Desktop </P> <BR /> <P> 4.
For Target object, click on the browse option and then choose ‘Network’ </P> <BR /> <P> 5. Name : My Network Places </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> And for Libraries the options are: </P> <BR /> <P> 1. Action : Update </P> <BR /> <P> 2. Target type : Shell Object </P> <BR /> <P> 3. Location : All Users Desktop </P> <BR /> <P> 4. For Target object, click on the browse option and then choose ‘Libraries’ </P> <BR /> <P> 5. Name : My Documents </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> So we have the following three shortcuts </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Restart the client, and once logged in with a domain user, the desktop will have the three shortcuts listed above; it will look something like below: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The above steps also work with Windows 8.1. Here is how it looks: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Hope you all find this information useful. </P> <BR /> <P> Thanks, </P> <BR /> <P> Saurabh Koshta </P> </BODY></HTML> Fri, 05 Apr 2019 02:56:54 GMT TechCommunityAPIAdmin 2019-04-05T02:56:54Z An update for ADMT, and a few other things too. <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Dec 13, 2013 </STRONG> <BR /> <P> So, we’ve been quiet for a few months, which is extraordinarily embarrassing after I basically <A href="" target="_blank"> told everyone </A> that we were going to not do that. The reality of what we do in support is that sometimes it’s “All Hands on Deck”, which is where we’ve been lately. </P> <BR /> <P> At any rate, here’s some assorted news, updates, and announcements. Today we’re going to talk about ADMT, SHA-1, Folder Redirection, Roaming Profiles, STOP errors, and job opportunities. Yup, all in one big post. It’s not quite a mail sack, but hopefully you all will find it interesting and/or useful – especially the bit at the end. We’ll try to get the regular posts moving again ASAP as we get into 2014.
</P> <BR /> <P> </P> <BR /> <P> <STRONG> ADMT OS Emancipation </STRONG> </P> <BR /> <P> <STRONG> Update coming to allow you to install on any supported server OS version </STRONG> </P> <BR /> <P> News just in: There’s an updated version of ADMT on the way that will allow you to install on newer OS versions. Here’s what we got from the ADMT product team: </P> <BR /> <P> In short, the update will allow ADMT to install on our newer OSs (both the ADMT and PES components). This should help alleviate some of the problems that customers have been reporting with the tool. We know that there are many of you who would like to see improvements or additional features in the tool beyond this, but we made the decision to focus this update on the OS compatibility issues, since that’s the thing that is impacting migrations the most right now.&nbsp; We currently do not have any plans for further updates after this one (beyond bug fixes). </P> <BR /> <P> The changes we have made require a fair bit of testing before we can release them – among other things, we have to test full-scale migrations against each combination of OS versions to make sure that nothing unexpected occurs.&nbsp; Once that testing is complete, we’ll publish the new version for public download, probably as an update to the existing 3.2 version.&nbsp; We don’t have an exact date right now, since it’s likely to take us a few months to finish our testing, but we’re hoping to have it out and available in the first quarter of 2014. </P> <BR /> <P> <STRONG> Update:&nbsp; The new version of the tool is available via the Connect site located <A href="#" target="_blank"> here </A> .&nbsp; To get it, you will have to log in with&nbsp;a Microsoft account and join the Azure AD Connection program.&nbsp; (This is just the name of the program, you don't actually have to be using Windows Azure Active Directory or anything like that). 
</STRONG> </P> <BR /> <P> </P> <BR /> <P> <STRONG> Out with the old (and the insecure) </STRONG> </P> <BR /> <P> <STRONG> We’ve announced the deprecation of SHA-1 algorithms </STRONG> </P> <BR /> <P> <EM> This one comes to us from former AskDS writer Mike Stephens. Mike changed roles last summer, and most of what he works on these days we can’t talk about – but some things we can: </EM> </P> <BR /> <P> Some of you may remember <A href="#" target="_blank"> this security advisory </A> where we announced the deprecation of RC4-based cryptographic algorithms. Some of you may also remember <A href="" target="_blank"> this blog post </A> from a few months ago where we talked about the upcoming deprecation of MD5-based algorithms. </P> <BR /> <P> Deprecation is a fancy word for “we don’t support it anymore moving forward, so you should look at turning it off.” </P> <BR /> <P> If you’re sensing a trend here, you’re not wrong. <A href="" target="_blank"> We just announced yesterday that we are planning the deprecation of SHA-1 algorithms. </A> </P> <BR /> <P> This means that moving forward, the minimum security you want on anything cryptographic is SHA-2 with a 2048-bit key. Those of you running certificate authorities should start planning on transitioning to stronger keys as soon as you can. Those of you who have server or web applications in your environment (pretty much everyone) should start reviewing your applications to find any that are using weak certificates. Update them if you can; contact the application vendor if you can’t. </P> <BR /> <P> Just like the previous updates, we’re not going to issue a hotfix that turns off SHA-1 on all your servers and workstations. We know that there are lots of older applications out there that might need to be updated before your environments are ready for this kind of change, so you are in control. What we will do is give you a KB article that tells you how to turn SHA-1 off when you’re ready.
That, and we’ll turn it off by default in the next version of the OS. </P> <BR /> <P> That being said, a few notes of caution. First, make sure you really, really check before disabling support for older cryptography algorithms in your environments. We’ve had a few cases where admins didn’t check the certificates their applications were using, and caused an outage with one or more of their applications when they turned off RC4. The point here is to test and verify application dependencies and compatibility before you make a widespread change. Second, have a plan to roll back the change if something you didn’t expect breaks. Finally, don’t wait to start transitioning your environment to stronger cryptographic algorithms. The longer your environment is using less secure cryptography, the more vulnerable you are to attacks. You can get ahead of the curve by updating your application requirements now to higher standards, and starting the work to transition your existing apps over to new certificates. </P> <BR /> <P> </P> <BR /> <P> <STRONG> One way, or the other… </STRONG> </P> <BR /> <P> <STRONG> Folder Redirection Group Policy doesn’t apply to Windows 8 and Windows 8.1 clients when you also configure it in System Center </STRONG> </P> <BR /> <P> <EM> This one comes to us from one of our tech leads, Kapil Chopra. Among many other duties, part of Kapil’s role is to watch for trends in the support cases that come into our frontline engineers, so that we can prioritize fixes that are affecting lots of customers. </EM> </P> <BR /> <P> I had a chance to work on multiple cases wherein Folder Redirection doesn’t get applied on Windows 8 and Windows 8.1. So I thought I’d post the details to make sure that everyone is aware of the issue and able to resolve it. </P> <BR /> <P> <STRONG> <EM> In all the cases that I addressed, we see the below-mentioned symptoms on the client </EM> </STRONG> <EM> : </EM> </P> <BR /> <P> 1.
In the RSOP, under the properties of User Configuration, we see that the Folder Redirection settings got applied successfully. </P> <BR /> <P> 2. Under the RSOP, when we browse to User Configuration &gt; Policies &gt; Windows Settings, we don’t see the Folder Redirection folder. </P> <BR /> <P> 3. In the GPRESULT /v output, we see that the folder redirection setting is showing up as N/A. </P> <BR /> <P> 4. There are no failures reported in the Application / System logs. </P> <BR /> <P> 5. Group Policy logging states that the policy is applied, as mentioned below: </P> <BR /> <P> GPSVC(32c.b6c) 12:55:50:136 ProcessGPOs(User): Processing extension Folder Redirection <BR /> GPSVC(32c.b6c) 12:55:50:136 ReadStatus: Read Extension's Previous status successfully. <BR /> GPSVC(32c.b6c) 12:55:50:136 CompareGPOLists: The lists are the same. <BR /> GPSVC(32c.b6c) 12:55:50:136 CompareGPOLists: The lists are the same. <BR /> GPSVC(32c.b6c) 12:55:50:136 GPLockPolicySection: Sid = S-1-5-21-2130729834-1480738125-1508530778-62684, dwTimeout = 30000, dwFlags = 0x0 <BR /> - <BR /> - <BR /> GPSVC(32c.b6c) 12:55:50:136 ProcessGPOList: Entering for extension Folder Redirection <BR /> GPSVC(32c.b6c) 12:55:50:136 UserPolicyCallback: Setting status UI to Applying Folder Redirection policy... <BR /> GPSVC(32c.b6c) 12:55:50:136 ProcessGPOList: No changes. CSE will not be passed in the IwbemServices intf ptr <BR /> GPSVC(32c.43c) 12:55:50:136 Message Status = &lt;Applying Folder Redirection policy...&gt; <BR /> - <BR /> - <BR /> GPSVC(32c.b6c) 12:55:50:152 ProcessGPOList: Extension Folder Redirection returned 0x0. </P> <BR /> <P> 6. Under the folder redirection tracing, it isn't getting past fdeploy.dll into the shell components and is not even attempting to read the fdeploy.ini files. </P> <BR /> <P> From the above symptoms, it is pretty evident that something is stopping the Folder Redirection engine from proceeding further.
So we went ahead and looked into the Folder Redirection operational logs under “Event Viewer &gt; Application and Services Logs &gt; Microsoft &gt; Windows &gt; Folder Redirection &gt; Operational Logs”. </P> <BR /> <P> <STRONG> <EM> Under the Operational logs we found an interesting event which might be causing the problem: </EM> </STRONG> </P> <BR /> <P> Log Name: Microsoft-Windows-Folder Redirection/Operational <BR /> Source: Microsoft-Windows-Folder Redirection <BR /> Date: 11/4/2013 11:54:58 AM <BR /> Event ID: 1012 <BR /> Task Category: None <BR /> Level: Information <BR /> User: SYSTEM <BR /> Computer: <BR /> Description: <STRONG> Folder Redirection configuration is being controlled by WMI configuration class Win32_FolderRedirectionUserConfiguration. </STRONG> </P> <BR /> <P> To confirm whether only Folder Redirection, or other components as well, were being controlled by WMI, we ran the PowerShell command “gwmi Win32_UserStateConfigurationControls” and found that all components, i.e. Folder Redirection / Offline Files / Roaming User Profiles, were controlled by WMI. <BR /> =================================================== <BR /> __GENUS : 2 <BR /> __CLASS : Win32_UserStateConfigurationControls <BR /> __SUPERCLASS : <BR /> __DYNASTY : Win32_UserStateConfigurationControls <BR /> __RELPATH : Win32_UserStateConfigurationControls=@ <BR /> __PROPERTY_COUNT : 3 <BR /> __DERIVATION : {} <BR /> __SERVER : WIN8TEST <BR /> __NAMESPACE : root\cimv2 <BR /> __PATH : \\WIN8TEST\root\cimv2:Win32_UserStateConfigurationControls=@ <BR /> FolderRedirection : 1 <BR /> OfflineFiles : 1 <BR /> RoamingUserProfile : 1 <BR /> PSComputerName : WIN8TEST <BR /> =================================================== </P> <BR /> <P> Now the question is: what does this WMI class, “ <STRONG> Win32_FolderRedirectionUserConfiguration </STRONG> ”, have to do with Folder Redirection?
</P> <BR /> <P> In order to answer that, everyone should be aware of the fact that in Windows 8 we introduced new WMI classes to manage and query Folder Redirection and Roaming User Profiles configuration using WMI controls. These WMI classes are listed below: </P> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Class </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Explanation </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_FolderRedirection </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The redirection properties of a known folder </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_FolderRedirectionHealth </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The health of a known folder that is being redirected </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_FolderRedirectionHealthConfiguration </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The health configuration properties for a known folder that is being redirected </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_FolderRedirectionUserConfiguration </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The user's folder redirection configuration settings </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_RoamingProfileBackgroundUploadParams </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Represents a roaming profile background upload operation </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_RoamingProfileMachineConfiguration </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The roaming profile configuration for a computer </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_RoamingProfileSlowLinkParams </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The slow-link parameters for roaming profiles </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_RoamingProfileUserConfiguration </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Represents a roaming profile user configuration </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG>
Win32_RoamingUserHealthConfiguration </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Represents health configuration properties for all roaming user profiles on a computer </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_UserProfile </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Represents a user profile </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Win32_UserStateConfigurationControls </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Contains properties that control the user state configuration for a computer. The property value settings for this class determine whether Group Policy or WMI should be the configuration mechanism for user state components. </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> <STRONG> So, the big question is - who is handing control of FR/CSC/RUP over to WMI? </STRONG> <BR /> In all the cases that we have dealt with, we found that the machines were deployed and managed using SCCM. So there might be something in the SCCM configuration which is changing the default behavior and passing control to WMI. We looked into System Center Configuration Manager and found the setting which might be causing all the pain. The exact configuration is mentioned below: </P> <BR /> <P> In the SCCM Configuration Manager console: <BR /> - Select Administration <BR /> - Select Client Settings </P> <BR /> <P> <BR /> <IMG src="" /> </P> <BR /> <P> - Pull up PROPERTIES of the Default Client Settings configuration and click on Compliance Settings </P> <BR /> <P> <IMG src="" /> <BR /> - <STRONG> Enable User Data and Profiles </STRONG> mentioned above is the setting which drives the control of Folder Redirection and Roaming User Profiles. </P> <BR /> <P> By default, the above configuration is set to NO.
Once enabled (set to YES), it passes the control of Folder Redirection, Offline Files, and Roaming User Profiles to WMI and stores this configuration under the registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\UserState\UserStateTechnologies\ConfigurationControls </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> This is evident from the fact that the FolderRedirection, OfflineFiles, and RoamingUserProfiles registry entries mentioned in the above snippet are set to 1. </P> <BR /> <P> <EM> More details about managing user state via System Center Configuration Manager are documented in the </EM> articles mentioned below: <BR /> - How to Create User Data and Profiles Configuration Items in Configuration Manager : <A href="#" target="_blank"> </A> <BR /> - Example Scenario for User Data and Profiles Management in Configuration Manager : <A href="#" target="_blank"> </A> </P> <BR /> <P> <STRONG> <EM> RESOLUTION </EM> </STRONG> </P> <BR /> <P> To resolve the issue, we need to change the value of “Enable User Data and Profiles” to NO under the Compliance settings in the SCCM configuration. </P> <BR /> <P> Another important fact to point out: changing the value of the above registry entries to “0” will resolve the issue on a client for a while, but the registry entries will automatically be flipped back to 1 once the SCCM client configuration piece gets executed on the Win8 or Win8.1 machines. By default, this configuration runs every hour to pull changes from the System Center Configuration Manager server. So you have to make the change in System Center if you want it to stick. </P> <BR /> <P> Most customers don’t realize what they are doing when they set this value to YES, so they will want to make sure it is set to NO in their environments. If a customer does want to use it, then they will need to make sure they are managing Folder Redirection through WMI and not through Group Policy, or they will run into the problems mentioned above.
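To check which mechanism is in control on a given client, a quick read of the registry values described above can be sketched in PowerShell. This is a sketch only, to be run on the affected client; the value names (FolderRedirection, OfflineFiles, RoamingUserProfiles) are taken from the text above, where 1 means WMI/SCCM is in control and 0 means Group Policy is.

```powershell
# Sketch: read the ConfigurationControls values described above (run on the client)
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\UserState\UserStateTechnologies\ConfigurationControls'
Get-ItemProperty -Path $key |
    Select-Object FolderRedirection, OfflineFiles, RoamingUserProfiles
```

Remember that flipping these values back to 0 only helps temporarily; the change has to be made in System Center to stick.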
</P> <BR /> <P> </P> <BR /> <P> <STRONG> Getting Rid of Pesky STOP Errors </STRONG> </P> <BR /> <P> <STRONG> Hotfix released to correct a crash in TCP/IP. </STRONG> </P> <BR /> <P> <A href="#" target="_blank"> Here is a fix </A> you will want to test and then deploy to your servers as soon as you can. For the past few months we have been tracking a large number of cases where servers would crash (blue screen) with a STOP 0xD1 error. We’ve been tracking this issue for a long time, but we were never able to figure out exactly what caused it, because it only happened under specific circumstances on multiprocessor computers, which is why it took us so long to figure it out. Those conditions used to be pretty rare, but as multiprocessor computers are now the norm, the problem’s frequency has increased.&nbsp; We now have a <A href="#" target="_blank"> hotfix for Windows Server 2008 R2 </A> available just in time for the holidays. </P> <BR /> <P> Windows Server 2012 and Windows Server 2012 R2 versions of the same hotfix are being tested and will be released in January 2014 if the testing pans out. </P> <BR /> <P> </P> <BR /> <P> <STRONG> Making the kids play nice together </STRONG> </P> <BR /> <P> <STRONG> Roaming profiles now coexist properly for Windows 7, Windows 8 and Windows 8.1 computers. </STRONG> </P> <BR /> <P> Some of you may recall <A href="" target="_blank"> a blog post </A> from one of our friends in PFE about problems roaming user profiles between computers running Windows 7 and Windows 8. In the original blog, we presented a workaround that, while it helped, was not really a fix for the issue. </P> <BR /> <P> Well, now we have a fix for the issue. <A href="#" target="_blank"> Two </A> <A href="#" target="_blank"> of them </A> , rather. I’ll explain. </P> <BR /> <P> To set expectations: Windows 8 uses a new profile format (just like Windows 7 had a new format when compared to XP). Windows 8.1 uses a third (or fourth) new profile format.
So, if you want to move data <STRONG> between </STRONG> computers running Windows 7, Windows 8, and Windows 8.1, you will need to use Folder Redirection…. OR you can consider using the cool new feature called Work Folders, for which we’ll be adding Windows 7 support in the coming months. But if you don’t do one of these things, then the profiles are separate – no data gets shared between the OS versions. </P> <BR /> <P> <A href="#" target="_blank"> KB 2887239 </A> and <A href="#" target="_blank"> KB 2890783 </A> allow roaming profiles to “roam” properly even if you’re in a mixed OS environment. That means users will be able to log in seamlessly to different devices without having to follow the workaround mentioned in Mark’s blog post. </P> <BR /> <P> </P> <BR /> <P> <STRONG> Last, but definitely not least:&nbsp; We’re hiring. </STRONG> </P> <BR /> <P> I mentioned at the start of this that the last few months for us in DS (and really in all of support) have been “All Hands on Deck”. And while things slow down a little over the holidays, we have more work to do than we have hands and minds to do it right now. <STRONG> </STRONG> </P> <BR /> <P> So we’re hiring. If you’re the sort of person who enjoys fixing hard problems, who likes getting into the guts of how software works, and who’s not afraid to constantly be asked to learn something new, you really should check out our <A href="#" target="_blank"> careers page </A> . There are positions available in Charlotte, NC, in Las Colinas, TX, and in Fargo, ND. And this isn’t just for DS – it’s for all of our Windows support teams (and others). If you’re interested, look for Support Engineer positions and send in your resume. </P> <BR /> <P> What we do is possibly the most technically demanding and challenging infrastructure job there is. Every day we work on problems that impact hundreds or thousands of users out there in the world, and some with even more impact than that. It’s not an *easy* job.
But it is a very fulfilling one. </P> <BR /> <P> If you’re interested in applying, check out these two blog posts we put up a while back, and we’ll look forward to talking to you. </P> <BR /> <P> <A href="" target="_blank"> Post-Graduate AD Studies </A> </P> <BR /> <P> <A href="" target="_blank"> Accelerating Your IT Career </A> </P> </BODY></HTML> Fri, 05 Apr 2019 02:55:25 GMT TechCommunityAPIAdmin 2019-04-05T02:55:25Z Locked or not? Demystifying the UI behavior for account lockouts <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 01, 2013 </STRONG> <BR /> <P> Hello Everyone, </P> <BR /> <P> This is Shijo from our team in Bangalore once again.&nbsp; Today I’d like to briefly discuss account lockouts, and some UI behaviors that can trip admins up when dealing with account lockouts. </P> <BR /> <P> If you’ve ever had to troubleshoot an account lockout issue, you might have noticed that sometimes accounts appear to be locked on some domain controllers, but not on others.&nbsp; This can be very confusing since you <BR /> typically know that the account has been locked out, but when you inspect individual DCs, they don’t reflect that status.&nbsp; This inconsistency happens because of some minor differences in the behavior of the UI between Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012. </P> <BR /> <P> <STRONG> Windows Server 2003 </STRONG> </P> <BR /> <P> In Windows Server 2003 the "Account is locked out" checkbox can be cleared ONLY if the account is locked out on the domain controller you are connected to <STRONG> . This means that if an account has been locked out, but the local DC has not yet replicated that information, you CANNOT unlock the account on the local DC. </STRONG> </P> <BR /> <P> <STRONG> <IMG src="" /> </STRONG> </P> <BR /> <P> <STRONG> <EM> Windows 2003 account properties for an unlocked account.&nbsp; Note that the checkbox is grayed out. 
</EM> </STRONG> </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> Windows Server 2008 and Windows Server 2008 R2 </STRONG> </P> <BR /> <P> In Windows Server 2008/2008 R2, the "Unlock account" checkbox will always be available (regardless of the status of the account). You can tell whether the local DC knows if the account is locked out by looking at the label on the checkbox, as shown in the screenshots below: </P> <BR /> <P> <STRONG> <IMG src="" /> </STRONG> </P> <BR /> <P> <STRONG> <EM> Windows 2008 account properties showing the “Unlock Account” checkbox.&nbsp; Notice that the checkbox is available regardless of the status of the account on the local DC. </EM> </STRONG> </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> <IMG src="" /> </STRONG> </P> <BR /> <P> <STRONG> <EM> Windows 2008 (and higher) Account Properties dialog box showing locked account on this domain controller </EM> </STRONG> </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> If the label on the checkbox&nbsp;is just "Unlock account", then this means that the domain controller you are connected to recognizes the account as unlocked. This does <STRONG> NOT </STRONG> mean that the account is not locked on other DCs, just that the specific DC we're working with has not replicated a lockout status yet.&nbsp; However, unlike Windows Server 2003, if the local DC doesn’t realize that the account is locked, you <STRONG> DO </STRONG> have the ability to unlock it from this interface by checking the checkbox and applying the change. </P> <BR /> <P> We changed the UI behavior in Windows Server 2008&nbsp;to help administrators in large environments unlock accounts faster when required, instead of having to wait for replication to occur, then unlock the account, and then wait for replication to occur again.
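Since each DC holds its own view of the lockout status until replication completes, it can help to ask every DC directly instead of relying on whichever one the UI happens to connect to. A minimal sketch, assuming the RSAT ActiveDirectory PowerShell module is installed and using a hypothetical account named test:

```powershell
# Sketch: query every DC for its local view of the account's lockout state
# ('test' is a placeholder identity; substitute your own account name)
Import-Module ActiveDirectory
Get-ADDomainController -Filter * | ForEach-Object {
    $u = Get-ADUser -Identity 'test' -Server $_.HostName -Properties LockedOut
    '{0}: LockedOut = {1}' -f $_.HostName, $u.LockedOut
}
```

If the values disagree across DCs, you are simply seeing replication latency at work, exactly as described above.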
</P> <BR /> <P> </P> <BR /> <P> <STRONG> Windows Server 2012 </STRONG> </P> <BR /> <P> You can also unlock accounts using the Active Directory Administrative Center (available in Windows Server 2008 R2 and later).&nbsp; In Windows Server 2012, this console is the preferred method of managing accounts in Active Directory. The screenshots below show how to do that. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> You can see from the account screenshot that the account is locked, which is denoted by the&nbsp;padlock symbol. To unlock the account, you would click the “Unlock account” tab, and you would see a change in the symbol, as can be seen below. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> You&nbsp;can also unlock the account using the&nbsp;PowerShell command&nbsp;shown in the screenshot below. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> In this example, I&nbsp;have unlocked the user account named test by simply specifying the DN of the account to unlock. You can modify your PowerShell command to incorporate many more switches, the details of which are in the following article. </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> Hopefully this helps explain why the older operating systems behave slightly differently from the newer ones, and will help you the next time you have to deal with an account that is locked out in your environment!
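The PowerShell unlock described above can be sketched as follows; the distinguished name is a hypothetical placeholder, and Unlock-ADAccount also accepts a SAM account name, GUID, or SID as the -Identity value.

```powershell
# Sketch: unlock an account by distinguished name
# (the DN below is a placeholder; substitute your own account's DN)
Import-Module ActiveDirectory
Unlock-ADAccount -Identity 'CN=test,CN=Users,DC=contoso,DC=com'
```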
</P> <BR /> <P> </P> <BR /> <P> If you’re looking for more information on Account Lockouts, check out the following links: </P> <BR /> <P> <A href="#" title="Troubleshooting Account Lockout" target="_blank"> Troubleshooting Account Lockout </A> </P> <BR /> <P> <A href="#" target="_blank"> Account Lockout Policy </A> </P> <BR /> <P> <A href="#" target="_blank"> Account Lockout Management Tools </A> </P> <BR /> <P> </P> <BR /> <P> <STRONG> Shijo “UNLOCK IT” Joy </STRONG> </P> </BODY></HTML> Fri, 05 Apr 2019 02:54:52 GMT TechCommunityAPIAdmin 2019-04-05T02:54:52Z Important Announcement: AD FS 2.0 and MS13-066 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Aug 15, 2013 </STRONG> <BR /> <STRONG> Update (8/19/13): </STRONG> <BR /> <BR /> We have republished MS13-066 with a corrected version of the hotfixes that contributed to this problem.&nbsp; If you had held off on installing the update, it should be safe to install on all of your ADFS servers now. <BR /> <BR /> <BR /> <BR /> The updated security bulletin is here: <A href="#" target="_blank"> </A> <BR /> <BR /> <BR /> <BR /> Thanks everyone for your patience with this one.&nbsp; If anyone is still having trouble after installing the re-released update, please call us and open a support case so that our engineers can get you working again! <BR /> <BR /> =============================================================== <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> Hi everyone, <STRONG> Adam </STRONG> and <STRONG> JR </STRONG> here with an important announcement. <BR /> <BR /> We’re tracking an important issue in support where some customers who have installed security update MS13-066 on their AD FS 2.0 servers are experiencing authentication outages.&nbsp; This is due to a dependency within the security update on certain versions of the AD FS 2.0 binaries.&nbsp; Customers who are already running ADFS 2.0 RU3 before installing the update should not experience any issues. 
<BR /> <BR /> We have temporarily suspended further downloads of this security update until we have resolved this issue for all ADFS 2.0 customers. <BR /> <BR /> Our Security and AD FS product teams are working together to resolve this with their highest priority.&nbsp; We’ll have more news for you soon in a follow-up post.&nbsp; In the meantime, here is what we can tell you right now. <BR /> <BR /> <BR /> <BR /> <STRONG> What to Watch For </STRONG> <BR /> <BR /> If you have installed KB 2843638 or KB 2843639 on your AD FS server, you may notice the following symptoms: <BR /> <BR /> <OL> <LI> Federated sign-in fails for clients. </LI> <LI> <STRONG> Event ID&nbsp;111 </STRONG> in the <STRONG> AD FS 2.0/Admin </STRONG> event log: </LI> </OL> <BR /> <BR /> The Federation Service encountered an error while processing the WS-Trust request. <BR /> <BR /> Request type: <A href="#" target="_blank"></A> <BR /> <BR /> Additional Data <BR /> <BR /> Exception details: <BR /> <BR /> System.Reflection.TargetInvocationException: <STRONG> Exception has been thrown by the target of an invocation. ---&gt; System.TypeLoadException: Could not load <BR /> type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.
</STRONG> <BR /> <BR /> at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration) <BR /> <BR /> --- End of inner exception stack trace --- <BR /> <BR /> at System.RuntimeMethodHandle._InvokeConstructor(Object[] args, SignatureStruct&amp; signature, IntPtr declaringType) <BR /> <BR /> at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) <BR /> <BR /> at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes) <BR /> <BR /> at Microsoft.IdentityModel.Configuration.SecurityTokenServiceConfiguration.CreateSecurityTokenService() <BR /> <BR /> at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateSTS() <BR /> <BR /> at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateDispatchContext(Message requestMessage, String requestAction, String responseAction, String <BR /> trustNamespace, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, WSTrustSerializationContext serializationContext) <BR /> <BR /> at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.BeginProcessCore(Message requestMessage, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, String requestAction, String responseAction, String trustNamespace, AsyncCallback callback, Object state) <BR /> <BR /> System.TypeLoadException: Could not load type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. 
<BR /> <BR /> at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration) <BR /> <BR /> <BR /> <BR /> <P> <STRONG> What to do if the problem occurs: </STRONG> </P> <OL> <LI> Uninstall the hotfixes from your AD FS servers. </LI> <LI> Reboot any system where the hotfixes were <BR /> removed. </LI> <LI> Check back here for further updates. </LI> </OL> <P> We’ll update this blog post with more information as it becomes available, including links to any followup posts about this problem. </P> </BODY></HTML> Fri, 05 Apr 2019 02:53:51 GMT TechCommunityAPIAdmin 2019-04-05T02:53:51Z MD5 Signature Hash Deprecation and Your Infrastructure <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Aug 14, 2013 </STRONG> <BR /> <P> Hi everyone, <STRONG> David </STRONG> here with a quick announcement. </P> <BR /> <P> Yesterday, MSRC announced a&nbsp;timeframe for deprecation of built-in support for&nbsp;certificates that use the&nbsp;MD5 signature hash. You can find more information here: </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> Along with this announcement,&nbsp;we've released a framework which allows enterprises to test their environment for certificates that might be blocked as part of the upcoming changes ( <STRONG> <A href="#" target="_blank"> Microsoft Security Advisory 2862966 </A> ) </STRONG> . This framework also allows future deprecation of other weak cryptographic algorithm to be streamlined and managed via registry updates (pushed via Windows Update). </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> Some Technical Specifics: </STRONG> </P> <BR /> <P> This change affects certificates that are used for the following: </P> <BR /> <UL> <BR /> <LI> server authentication </LI> <BR /> <LI> code signing </LI> <BR /> <LI> time stamping </LI> <BR /> <LI> Other certificate usages that used MD5 signature hash algorithm will NOT be blocked. 
</LI> <BR /> </UL> <BR /> <P> For&nbsp;code signing certificates, we will allow signed binaries that were signed before March 2009 to continue to work, even if the signing cert used the MD5 signature hash algorithm. </P> <BR /> <P> <STRONG> Note: </STRONG> Only certificates issued under a root CA in the Microsoft Root Certificate program are affected by this change.&nbsp; Enterprise-issued certificates are not affected (but should still be updated). </P> <BR /> <P> </P> <BR /> <P> <STRONG> What this means for you: </STRONG> </P> <BR /> <P> 1) If you're using certificates that have an MD5 signature hash (for example, if you have older web server certificates that used this hashing algorithm), you will need to update those certificates as soon as possible.&nbsp; The update is planned to release in February 2014; make sure anything you have that is internet facing has been updated by then. </P> <BR /> <P> You can find out what signature hash was used on a certificate by simply pulling up the details of that certificate on any Windows 8 or Windows Server 2012 machine.&nbsp; Look for the signature hash algorithm that was used. (The certificate in my screenshot uses sha1, but you will see md5 listed on certificates that use it). </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If you are on Server Core or have an older OS, you can see the signature hash algorithm by using <STRONG> certutil -v </STRONG> against the certificate. </P> <BR /> <P> 2) Start double-checking your internal applications and certificates to ensure that you don't have something older that's using an MD5 hash.&nbsp; If you find one, update it (or contact the vendor to have it updated).
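On systems with PowerShell, you can also sweep a certificate store for MD5-signed certificates in bulk; a rough sketch (the store path is an example — adjust it to the store you want to audit):

```powershell
# List certificates in the local machine's Personal store whose
# signature algorithm is MD5-based (e.g. md5RSA)
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.SignatureAlgorithm.FriendlyName -match 'md5' } |
    Select-Object Subject, NotAfter,
        @{ n = 'SigAlg'; e = { $_.SignatureAlgorithm.FriendlyName } }
```

This checks the same Signature algorithm field you would read in the certificate details dialog, just across the whole store at once.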
</P> <BR /> <P> 3) Deploy <A href="#" target="_blank"> KB 2862966 </A> in your test and QA environments and use it to test for weaker hashes (You are using test and QA environments for your major applications, right?).&nbsp; The update allows you to implement logging to see what would be affected by restricting a hash.&nbsp; It's designed to allow you to get ahead of the curve and find the potential weak spots in your environment. </P> <BR /> <P> Sometimes security announcements like this can seem a little like overkill, but remember that&nbsp;your certificates are only as strong as the hashing algorithm used in their signatures.&nbsp; As computing power increases, older hashing algorithms become easier for attackers to crack, allowing them to more easily fool computers and applications into allowing them access or executing code.&nbsp; We don't release updates like this lightly, so make sure you take the time to inspect your environments and fix the weak links, before some attacker out there tries to use them against you. </P> <BR /> <P> <STRONG> --David "Security is everyone's business" Beach </STRONG> </P> </BODY></HTML> Fri, 05 Apr 2019 02:53:44 GMT TechCommunityAPIAdmin 2019-04-05T02:53:44Z DFS Replication in Windows Server 2012 R2 and other goodies, now available on the Filecab blog! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 31, 2013 </STRONG> <BR /> Over at the <A href="#" title="" target="_blank"> Filecab </A> blog, AskDS alum and all-around nice guy <STRONG> Ned Pyle </STRONG> has posted the first of several blogs about new features coming your way in Windows Server 2012 R2.&nbsp; If you're a DFS administrator or just curious, go take a look! <P> Ned promises more posts in the near future, and&nbsp;Filecab is near and dear to our hearts here in DS (they make a bunch of things we support), so if you don't already have it on your RSS feed list, it might be a good time to add it.
</P> </BODY></HTML> Fri, 05 Apr 2019 02:53:29 GMT TechCommunityAPIAdmin 2019-04-05T02:53:29Z Roaming Profile Compatibility - The Windows 7 to Windows 8 Challenge <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 31, 2013 </STRONG> <BR /> <P> <EM> [Editor's note:&nbsp; Everything Mark mentions for Windows 8 clients here is also true for Windows 8.1 clients.&nbsp; Windows 8 and Windows 8.1 clients use the same (v3) profile version, so the 8.1 upgrade will not prevent this from happening if you have roaming profiles in your environment.&nbsp; Something to be aware of if you're planning to migrate users over to the new OS version. -David] </EM> </P> <BR /> <P> </P> <BR /> <P> Hi. It’s Mark Renoden, Senior Premier Field Engineer in Sydney, Australia here again. Today I’ll offer a workaround for an issue that’s causing a number of customers around the world a degree of trouble. It turns out to be reasonably easy to fix, perhaps just not so obvious. </P> <BR /> <P> <STRONG> The Problem </STRONG> </P> <BR /> <P> The knowledge base article <STRONG> <EM> "Unpredictable behavior if you migrate a roaming user </EM> <EM> profile from Windows 8 to Windows 7" </EM> </STRONG> - <A href="#" target="_blank"> </A> states: </P> <BR /> <P> <EM> Windows 7 and Windows 8 use similar user profile formats, which do not support interoperability when they roam between computers that are running different versions of Windows. When a user who has a Windows 7 profile signs in to a Windows 8-based computer for the first time, the user profile is updated to the new Windows 8 format. After this occurs, the user profile is no longer compatible with Windows 7-based computers. See the "More information" section for detailed information about how this issue affects roaming and mandatory profiles.
</EM> </P> <BR /> <P> This sort of problem existed between Windows XP and Windows Vista/7 but was mitigated by Windows Vista/7 using a profile that used a .v2 extension.&nbsp; The OS would handle storing the separate profiles automatically for you when roaming between those OS versions.&nbsp; With Windows 7 and Windows 8, both operating systems use roaming profiles with a .v2 extension, even though Windows 8 is actually writing the profile in a newer format. </P> <BR /> <P> <STRONG> Mark’s Workaround </STRONG> </P> <BR /> <P> The solution is to use separate roaming profiles for each operating system by utilizing an environment variable in the profile path. </P> <BR /> <P> <STRONG> Configuration </STRONG> </P> <BR /> <P> File server for profiles: </P> <BR /> <OL> <BR /> <LI> Create profile share “\\Server\ProfilesShare” with permissions configured so that users have write access </LI> <BR /> <LI> In ProfilesShare, create folders “Win7” and “Win8” </LI> <BR /> </OL> <BR /> <P> <IMG src="" /> <BR /> </P> <BR /> <P> Active Directory: </P> <BR /> <OL> <BR /> <LI> Create OU for Windows 7 Clients (say “Win7OU”) and create/link a GPO here (say “Win7GPO”) </LI> <BR /> <LI> Create OU for Windows 8 Clients (say “Win8OU”) and create/link a GPO here (say “Win8GPO”) </LI> <BR /> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Note: As an alternative to separate OUs, a WMI filter may be used to filter according to Operating System: </P> <BR /> <P> Windows 7 - <STRONG> SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" and ProductType = "1" </STRONG> </P> <BR /> <P> Windows 8 - <STRONG> SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.2%" and ProductType = "1" </STRONG> </P> <BR /> <P> 3.
Edit Win7GPO </P> <BR /> <OL> <OL> <BR /> <LI> Expand Computer Configuration -&gt; Preferences -&gt; Windows Settings </LI> <BR /> <LI> Under Environment create an environment variable with </LI> <BR /> <OL> <BR /> <LI> Action: Create </LI> <BR /> <LI> System Variable </LI> <BR /> <LI> Name: OSVer </LI> <BR /> <LI> Value: Win7 </LI> <BR /> </OL> </OL> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 4. Edit Win8GPO </P> <BR /> <OL> <OL> <BR /> <LI> Expand Computer Configuration -&gt; Preferences -&gt; Windows Settings </LI> <BR /> <LI> Under Environment create an environment variable with </LI> <BR /> <OL> <BR /> <LI> Action: Create </LI> <BR /> <LI> System Variable </LI> <BR /> <LI> Name: OSVer </LI> <BR /> <LI> Value: Win8 </LI> <BR /> </OL> </OL> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 5. Set user profile paths to <A> \\Server\ProfilesShare\%OSVer%\%username%\ </A> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Clients: </P> <BR /> <OL> <BR /> <LI> Log on with administrative accounts first to confirm creation of the OSVer environment variable </LI> <BR /> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> 2. Log in as users and you’ll observe that different user profiles are created in the appropriate folder in the profiles share depending on client OS </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> Conclusion </STRONG> </P> <BR /> <P> I haven't run into any issues in testing but this might be one of those cases where it's important to use "wait for network". My testing suggests that using "create" as the action on the environment variable mitigates any timing issues.&nbsp; This is because after the environment variable is created for the machine, this variable persists across boots and doesn't depend on GPP re-application. </P> <BR /> <P> You may also wish to consider the use (and testing) of a folder redirection policy to provide users with their data as they cross between Windows 7 and Windows 8 clients. 
While I have tested this to work with <BR /> “My Documents”, there may be varying degrees of success here depending on how Windows 8’s modern apps fiddle with things. </P> <BR /> <P> - Mark “Square Peg in a Round Hole” Renoden </P> <BR /> <P> </P> <BR /> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 02:53:22 GMT TechCommunityAPIAdmin 2019-04-05T02:53:22Z Because TechNet didn't have enough Active Directory awesomeness already <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 15, 2013 </STRONG> <BR /> Time for a quick lesson in blog history.&nbsp; There'll be a quiz at the end!&nbsp; Ok not really, but some history all the same. <BR /> <BR /> Back a few years ago when we here at Microsoft were just starting to get savvy to this whole blog thing, one of our support escalation engineers, <A href="#" title="" target="_blank"> Tim Springston </A> , decided to start up a blog about Active Directory.&nbsp; You might have seen it in the past.&nbsp; Over the years he's posted some really great insights and posts there that are definitely worth reading if you have the time. <BR /> <BR /> Of course, the rest of us decided to do something completely different and started up AskDS a little later.&nbsp; Rumor has it that it had something to do with a high-stakes poker game (Tim *is* from Texas, after all),&nbsp;but no one is really sure why we wound up with two support blogs to be honest - it's one of those things that just sort of happened. 
<BR /> <BR /> Anyway, all this time while we've been partying it up over here on TechNet, our AD product team has been marooned over on MSDN with an audience of mostly developers.&nbsp; Not that developers are bad folks&nbsp;- after all, they make the apps that power pretty much everything - but the truth is that a lot of what we do in Active Directory in terms of feature development is also targeted at Administrators and Architects and IT Pros.&nbsp; You know, the people who read blogs on TechNet and may not think to also check MSDN. <BR /> <BR /> After a lot of debate and discussion internally, the AD product team came to the conclusion that they really should have a presence on TechNet so that they could talk to everyone here about the cool features they're working on. <BR /> <BR /> The problem?&nbsp; Well, we sort of had a monopoly over here in support on AD-related blog names. :) <BR /> <BR /> Meetings were convened.&nbsp; Conferences were held.&nbsp; Email flew back and forth.&nbsp; There might even have been some shady dealings involving gifts of sugary pastries.&nbsp; In the end though, Tim graciously agreed to move his blogging efforts over to AskDS and cede control of <A href="#" target="_blank"> </A> to the Active Directory Product team. <BR /> <BR /> The result?&nbsp; Everyone wins.&nbsp; Tim's now helping us write cool stuff for AskDS (you'll see plenty of that in the near future, I'm sure), and the product team has <A href="#" target="_blank"> already started </A> <A href="#" target="_blank"> posting </A> <A href="#" target="_blank"> a bunch </A> <A href="#" target="_blank"> of things </A> that you might have missed when they were on MSDN.
<BR /> <BR /> If you haven't seen what they're up to over there, go and <A href="#" target="_blank"> take a look </A> .&nbsp; And as we get out of summer and get our people back from vacation, and, you know, roll a whole new server OS out the door, keep an eye on both blogs for updates, tips, explanations, and all manner of yummy AD-related goodness. <BR /> <BR /> <BR /> <BR /> <P> --David "Wait,&nbsp;we get another writer for AskDS??" Beach </P> </BODY></HTML> Fri, 05 Apr 2019 02:52:19 GMT TechCommunityAPIAdmin 2019-04-05T02:52:19Z Interesting findings on SETSPN -x -f <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jul 01, 2013 </STRONG> <BR /> <P> Hello folks, this is <A href="#" target="_blank"> Herbert </A> from the Directory Services support team in Europe! </P> <BR /> <P> Kerberos is becoming increasingly mandatory for really cool features such as <A href="#" target="_blank"> Protocol Transition </A> .&nbsp; Moreover, as you might be painfully aware, managing Service Principal Names (SPNs) for the use of Kerberos by applications can be daunting at times. </P> <BR /> <P> In this blog, we will not be going into the <A href="#" target="_blank"> gory details of SPNs </A> and how applications are using them. In fact, I’m assuming you already have some basic knowledge about SPNs and how they are used. </P> <BR /> <P> Instead, we’re going to talk about an interesting behavior that can occur when an administrator is doing their due diligence managing SPNs.&nbsp; This behavior can arise when you are checking the status of the account the SPN is planned for, or when you are checking to see if the SPN that must be registered is already registered in the domain or forest. </P> <BR /> <P> As we all know, the KDCs cannot issue tickets for a particular service if there are duplicate SPNs, and authentication does not work if the SPN is on the wrong account.
</P> <BR /> <P> Experienced administrators learn to use the SETSPN utility to validate SPNs when authentication problems occur.&nbsp; In the Windows Server 2008 version of SETSPN, we provide several options useful to identifying duplicate SPNs: </P> <BR /> <P> -&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you want to look for a duplicate of a particular SPN: <STRONG> SETSPN /q &lt;SPN&gt; </STRONG> </P> <BR /> <P> -&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you want to search for any duplicate in the domain: <STRONG> SETSPN /x </STRONG> </P> <BR /> <P> You can also use the “ <STRONG> /f </STRONG> ” option to extend the duplicate search to the whole Forest. Many Active Directory Admins use this as a proactive check of the forest for duplicate SPNs. </P> <BR /> <P> So far, so good… </P> <BR /> <H2> The Problem </H2> <BR /> <P> Sometimes, you’ll get an error running SETSPN -x -f: </P> <BR /> <P> <STRONG> c:\&gt;SETSPN -X -F -P <BR /> Checking forest DC=contoso,DC=com <BR /> Operation will be performed forestwide, it might take a while. <BR /> </STRONG> <STRONG> Ldap Error(0x55 -- Timeout): ldap_get_next_page_s </STRONG> </P> <BR /> <P> “-P” just tells the tool not to clutter the output with progress indications, but you can see from that error message that we are not talking only about Kerberos anymore. There is a new problem. </P> <BR /> <P> </P> <BR /> <H2> What are we seeing in the diagnostic data? </H2> <BR /> <P> In a network trace of the above&nbsp;you will see a query against the GC (port 3268) with no base DN and the filter <STRONG> “ </STRONG> <STRONG> (servicePrincipalName=*) </STRONG> ”. SETSPN uses paged queries with a page size of 100 objects. In a large Active Directory environment this yields quite a number of pages. </P> <BR /> <P> If you look closely at network capture data, you’ll often find that Domain Controller response times slowly increase towards the end of the query. 
If the command completes, you’ll sometimes see that the delay is longest on the last page returned. For example, when we reviewed data for a recent customer case, we noted: </P> <BR /> <P> <STRONG> “Customer also noticed that it usually hangs on record 84.” </STRONG> </P> <BR /> <P> </P> <BR /> <P> Troubleshooting LDAP performance and building custom queries calls for the use of the <A href="#" target="_blank"> STATS Control </A> . Here is how you use it in LDP.exe: </P> <BR /> <P> Once connected to port 3268 and logged on as an admin, you can build the query in the same manner as SETSPN does. </P> <BR /> <P> 1. Launch LDP as an administrator. </P> <BR /> <P> 2. Open the Search Window using Browse\Search or Ctrl-S. </P> <BR /> <P> 3. Enter the empty base DN and the filter, and specify “Subtree” as the scope. The list of attributes does not matter here. <BR /> <BR /> <IMG src="" /> </P> <BR /> <P> 4. Go to Options: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> 5. Specify an “Extended” query as we want to use controls. Note I have specified a page size of 100 elements, but that is not important, as we will see later. Let’s move on to “Controls”: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <BR /> 6. From the List of Controls select “Search Stats”. When you select it, it is automatically checked. </P> <BR /> <P> 7. Now “OK” your way out of the “Controls” and “Options” dialogs. </P> <BR /> <P> 8. Hit “Run” on the “Search” dialog.
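The same paged GC query that SETSPN issues can also be reproduced from PowerShell via the .NET DirectorySearcher, which is handy if you want to experiment with page sizes or timeouts. A sketch, assuming it runs on a domain-joined machine with permission to query the global catalog:

```powershell
# Reproduce SETSPN's forest-wide query: a paged search against the GC
# with no base DN and the filter (servicePrincipalName=*)
$forest   = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
$searcher = $forest.FindGlobalCatalog().GetDirectorySearcher()

$searcher.Filter      = "(servicePrincipalName=*)"
$searcher.SearchScope = "Subtree"
$searcher.PageSize    = 100                              # same page size SETSPN uses
$searcher.ClientTimeout = [TimeSpan]::FromSeconds(60)    # mimic SETSPN's 60-second limit
[void]$searcher.PropertiesToLoad.Add("servicePrincipalName")

$results = $searcher.FindAll()
"Objects with at least one SPN: {0}" -f $results.Count
$results.Dispose()   # FindAll results hold unmanaged resources
```

Raising or removing ClientTimeout is the same lever that makes LDIFDE, LDP, and ADFIND succeed where SETSPN times out, as discussed below.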
</P> <BR /> <P> </P> <BR /> <P> You should get a large list of results, but also the STATS a bit like this one: </P> <BR /> <P> </P> <BR /> <P> <STRONG> Call Time: 62198 (ms) </STRONG> </P> <BR /> <P> <STRONG> Entries Returned: 8508 </STRONG> </P> <BR /> <P> <STRONG> Entries Visited: 43076 </STRONG> </P> <BR /> <P> <STRONG> Used Filter: (servicePrincipalName=*) </STRONG> </P> <BR /> <P> <STRONG> Used Indices: idx_servicePrincipalName:13561:N </STRONG> </P> <BR /> <P> <STRONG> Pages Referenced&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; : 801521 </STRONG> </P> <BR /> <P> <STRONG> Pages Read From Disk&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; : 259 </STRONG> </P> <BR /> <P> <STRONG> Pages Pre-read From Disk&nbsp; : 1578 </STRONG> </P> <BR /> <P> <STRONG> Pages Dirtied&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; : 0 </STRONG> </P> <BR /> <P> <STRONG> Pages Re-Dirtied&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; : 0 </STRONG> </P> <BR /> <P> <STRONG> Log Records Generated&nbsp;&nbsp;&nbsp;&nbsp; : 0 </STRONG> </P> <BR /> <P> <STRONG> Log Record Bytes Generated: 0 </STRONG> </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> What are these stats telling us? </STRONG> </P> <BR /> <P> We have a total of 8508 objects in the “Entries Returned” result set, but we have visited 43076 objects. That sounds odd, because we used an Index “ <STRONG> idx_servicePrincipalName </STRONG> ”. This does not really look as if the query is using the index. </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> So what is happening here? </STRONG> </P> <BR /> <P> At this point, we experience the special behavior of multi-valued non-linked attributes and how they are represented in the index. To illustrate this, let me explain a few data points: </P> <BR /> <P> </P> <BR /> <P> 1. 
A typical workstation or member server&nbsp;has these SPNs: </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> WSMAN/herbertm5 </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> WSMAN/ </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> TERMSRV/ </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> TERMSRV/HERBERTM5 </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> RestrictedKrbHost/HERBERTM5 </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> HOST/HERBERTM5 </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> RestrictedKrbHost/ </STRONG> </P> <BR /> <P> <STRONG> servicePrincipalName: <BR /> HOST/ </STRONG> </P> <BR /> <P> </P> <BR /> <P> 2. When you look at the result set from running setspn, you notice that you’re not getting all of the SPNs you’d expect: </P> <BR /> <STRONG> dn:CN=HQSCCM2K3TEST,OU=SCCM,OU=Test Infrastructure,OU=Domain Management,DC=contoso,DC=com </STRONG> <BR /> <STRONG> servicePrincipalName: WSMAN/sccm2k3test </STRONG> <BR /> <STRONG> servicePrincipalName: WSMAN/ </STRONG> <BR /> <P> If you look at it closely, you notice all the SPN’s start with characters very much at the end of the alphabet, which also happens to be the end of the index. These entries do not have a prefix like “HOST”. </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> So how does this happen? </STRONG> </P> <BR /> <P> In the resultant set of LDAP queries, an object may only appear once, but it is possible for an object to be in the index multiple times, because of the way the index is built. Each time the object is found in the index, the LDAP Server has to check the other values of the indexed attribute of the object to see whether it also matches the filter and thus was already added to the result set.&nbsp; The LDAP server is doing its diligence to avoid returning duplicates. 
</P> <BR /> <P> For example, the first hit in the index for the above workstation example is “ <STRONG> HOST/HERBERTM5 </STRONG> ”. </P> <BR /> <P> The second hit “ <STRONG> HOST/ </STRONG> ” kicks off the algorithm. </P> <BR /> <P> The object has already been read, so the IO and CPU hit has already happened. </P> <BR /> <P> Now the query keeps walking the index, and once it arrives at the prefix “ <STRONG> WSMAN </STRONG> ”, the rate of objects it needs to skip approaches 100%. Therefore, it looks at many objects while adding few additional objects to the result set. </P> <BR /> <P> On the last page of the query, things get even worse. There is an almost 100% rate of duplicates, so the 60-second clock that SETSPN allows for the query is ticking, and there are only 8 objects to be found. If the Domain Controller has a slow CPU or the objects need to be read from the disk because of memory pressure, the SETSPN query will probably not finish within a minute for a large forest.&nbsp; This results in the error <STRONG> Ldap Error(0x55 -- Timeout): ldap_get_next_page_s. </STRONG> The larger the index (meaning, the more computers and users you have in your forest), the greater the likelihood that this can occur. </P> <BR /> <P> If you run the query with LDIFDE, LDP, or ADFIND, you will have a better chance that the query will be successful. This is because by default these tools do not specify a time-out and thus use the values of the Domain Controller LDAP Policy. The Domain Controller LDAP policy is 120 seconds (by default)&nbsp;instead of 60 seconds. </P> <BR /> <P> The problem with the results generated by these tools is that you have to correlate the results from the different outputs yourself – the tools won’t do it for you. </P> <BR /> <P> <STRONG> </STRONG> </P> <BR /> <P> <STRONG> So what can you do about it?
</STRONG> </P> <BR /> <P> Typically you’ll have to do further troubleshooting, but here are some common causes/resolutions that I’ve seen: </P> <BR /> <OL> <BR /> <LI> A domain controller that is short on memory encounters many cache misses and thus substantial IO. You can diagnose this using the NTDS performance counters in Performance Monitor.&nbsp; You can add memory to reduce the IO rate and speed things up. </LI> <BR /> <LI> If you are not experiencing memory pressure, the limiting factor could be the “Single-Thread-Performance” of the server. This is important as every LDAP query gets a worker thread and runs no faster than one logical CPU core can manage.&nbsp; If you have a low number of logical cores in a system with a high amount of CPU activity, this can cause the threads to delay long enough for us to see an inconsistent query return.&nbsp; In this situation your best bet is to look for ways to reduce overall processor load on the domain controller – for example, moving other services off of the machine. </LI> <BR /> <LI> There is an update for Windows Server 2012 which helps to avoid the problem: </LI> <BR /> </OL> <BR /> <P> 2799960&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Time-out error when you run SETSPN.exe in Windows 8 or Windows Server 2012 </P> <BR /> <P> <A href="#" target="_blank"> </A> </P> <BR /> <P> The last customer I helped had a combination of issues 1 and 2, and once he chose a beefier DC with more current hardware, the command always succeeded.&nbsp; Another customer had a much bigger environment and ended up using the update I listed above to overcome the issue. </P> <BR /> <P> I hope you have enjoyed this journey explaining what is happening on such a SETSPN query.
</P> <BR /> <P> Cheers, </P> <BR /> <P> Herbert "The Thread Master" Mauerer </P> </BODY></HTML> Fri, 05 Apr 2019 02:52:12 GMT TechCommunityAPIAdmin 2019-04-05T02:52:12Z Windows Server 2012 R2 - Preview available for download <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jun 25, 2013 </STRONG> <BR /> Just in case you missed the <A href="#" target="_blank"> announcement </A> , the preview build of Windows Server 2012 R2 is now available for <A href="#" target="_blank"> download </A> .&nbsp; If you want to see the latest and greatest, head on over there and take a gander at the new features.&nbsp; All of us here in support have skin in this game, but Directory Services (us) has several new features that we'll be talking about over the coming months.&nbsp; Including a lot of this stuff named in the announcement: <BR /> <BR /> <EM> <STRONG> "Empowering employee productivity </STRONG> – Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice." </EM> <BR /> <BR /> Obviously this is still a beta release.&nbsp; Things can change before RTM.&nbsp; Don't go doing anything silly like deploying this in production - it's officially unsupported at this stage, and for testing purposes only.&nbsp; But with all that in mind,&nbsp;give it a whirl,&nbsp;and hit the <A href="#" target="_blank"> TechNet forums </A> to provide feedback&nbsp;and ask questions.&nbsp; You will also want to keep an eye on some of our server and tools blogs in the near future.&nbsp; For your convenience, a bunch of those are linked in the bar up top for you. <BR /> <BR /> Happy previewing! 
<BR /> <BR /> <P> --David "Town Crier" Beach </P> </BODY></HTML> Fri, 05 Apr 2019 02:51:39 GMT TechCommunityAPIAdmin 2019-04-05T02:51:39Z Two lines that can save your AD from a crisis <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jun 04, 2013 </STRONG> <BR /> <P> <EM> Editor's note:&nbsp; This is the first of very likely many "DS Quickies".&nbsp; "Quickies" are shorter technical blog posts that relate hopefully-useful information and concepts for you to use in administering your networks.&nbsp; We thought about doing these on Twitter or something, but sadly&nbsp;we're still too technical&nbsp;to be bound by a 140-character limit :-) </EM> </P> <BR /> <P> <EM> For those of you who really look forward to the larger articles to help explain different facets of Windows, Active Directory, or troubleshooting, don't worry - there will still be plenty of those too. </EM> </P> <BR /> <P> </P> <BR /> <P> Hi! This is Gonzalo writing to you from the support team for Latin America. </P> <BR /> <P> Recently we got a call from a customer, where one of the administrators accidentally executed a script that was intended to delete local users… on a domain controller. The result was that all domain users were deleted from the environment in just a couple of seconds. The good thing was that this customer had previously enabled Recycle Bin, but it still took a couple of hours to recover all users as this was a very large environment. This type of issue is something that comes up all the time, and it’s always painful for the customers who run into it. I have worked many cases where the lack of&nbsp;proper protection to objects caused a lot of issues for customer environments and even in some cases ended up costing administrators their jobs, all because of an accidental click. But, how can we avoid this? 
</P> <BR /> <P> If you take a look at the properties of any object in Active Directory, you will notice a checkbox named “Protect object from accidental deletion” under the Object tab. When this is enabled, permissions are set to deny deletion of this object to Everyone. </P> <BR /> <P> <IMG src="" /> <BR /> </P> <BR /> <P> With the exception of Organizational Units, this setting is not enabled by default on objects in Active Directory.&nbsp; For other object types, it needs to be set manually when the object is created. The challenge is how to easily enable this on thousands of objects. </P> <BR /> <P> <STRONG> ANSWER!&nbsp; Powershell! </STRONG> </P> <BR /> <P> Two simple PowerShell commands will enable you to set accidental deletion protection on all objects in your Active Directory. The first command will set this on any users or computers (or any object with value user on the ObjectClass attribute). The second command will set this on every Organizational Unit (setting it again on OUs that are already protected is harmless). </P> <BR /> <P> </P> <BR /> <P> <STRONG> Get-ADObject -filter {(ObjectClass -eq "user")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true </STRONG> </P> <BR /> <P> <STRONG> Get-ADOrganizationalUnit -filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true </STRONG> </P> <BR /> <P> </P> <BR /> <P> Once you run these commands, your environment will be protected against accidental (or intentional) deletion of objects. </P> <BR /> <P> Note: As a proof of concept, I tested the script that my customer used with the accidental deletion protection enabled and none of the objects in my Active Directory environment were deleted.
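Before (or after) flipping the switch everywhere, it can help to audit which objects are still unprotected. A minimal sketch, assuming the ActiveDirectory module (RSAT) and a domain connection; ProtectedFromAccidentalDeletion is an extended property that must be requested explicitly:

```powershell
# List users, computers and OUs that are NOT yet protected from accidental deletion.
Import-Module ActiveDirectory

Get-ADObject -Filter {(ObjectClass -eq "user") -or (ObjectClass -eq "organizationalUnit")} `
    -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Select-Object Name, ObjectClass, DistinguishedName
```

Run it once before the two Set-ADObject commands to see the scope of the change, and again afterwards to confirm nothing was missed.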
</P> <BR /> <P> </P> <BR /> <P> <STRONG> Gonzalo “keep your job” Reyna </STRONG> </P> </BODY></HTML> Fri, 05 Apr 2019 02:51:32 GMT TechCommunityAPIAdmin 2019-04-05T02:51:32Z Back to the Loopback: Troubleshooting Group Policy loopback processing, Part 2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on May 21, 2013 </STRONG> <BR /> <P> Welcome back! <A href="#" target="_blank"> Kim Nichols </A> here once again with the much anticipated Part 2 to <A href="#" target="_blank"> Circle Back to Loopback </A> .&nbsp; Thanks for all the comments and feedback on Part 1.&nbsp; For those of you joining us a little late in the game, you'll want to check out <A href="#" target="_blank"> Part 1: Circle Back to Loopback </A> before reading further. </P> <BR /> <P> In my first post, the goal was to keep it simple.&nbsp; Now, we're going to go into a little more detail to help you identify and troubleshoot Group Policy issues related to loopback processing.&nbsp; If you follow these steps, you should be able to apply what you've learned to any loopback scenario that you may run into (assuming that the environment is healthy and there are no other policy infrastructure issues). </P> <BR /> <P> To troubleshoot loopback processing you need to know and understand: </P> <BR /> <OL> <BR /> <LI> The status of the loopback configuration.&nbsp; Is it enabled, and if so, in which mode? </LI> <BR /> <LI> The desired state configuration vs. the actual state configuration of applied policy </LI> <BR /> <LI> Which settings from which GPOs are "supposed" to be applied? </LI> <BR /> <LI> To whom should the settings apply or not apply? </LI> <BR /> <OL> <BR /> <LI> The security filtering requirements when using loopback </LI> <BR /> <LI> Is the loopback setting configured in the same GPO or a separate GPO from the user settings? </LI> <BR /> <LI> Are the user settings configured in a GPO with computer settings? 
</LI> <BR /> </OL> </OL> <BR /> <H2> What you need to know: </H2> <BR /> <P> <STRONG> Know if loopback is enabled and in which mode </STRONG> </P> <BR /> <P> The first step in troubleshooting loopback is to know that it is enabled.&nbsp; It seems pretty obvious, I know, but often loopback is enabled by one administrator in one GPO without understanding that the setting will impact all computers that apply the GPO.&nbsp; This gets back to <A href="#" target="_blank"> Part 1 </A> of this blog . . . loopback processing is a <EM> computer </EM> configuration setting. </P> <BR /> <P> Take a deep cleansing breath and say it again . . . Loopback processing is a <EM> computer </EM> configuration setting.&nbsp; :-) </P> <BR /> <P> Everyone feels better now, right?&nbsp; The loopback setting configures a registry value on the computer to which it applies.&nbsp; The Group Policy engine reads this value and changes how it builds the list of applicable user policies based on the selected loopback mode. </P> <BR /> <P> The easiest way to know if loopback might be causing troubles with your policy processing is to collect a <STRONG> GPResult /h </STRONG> from the computer. <STRONG> Since loopback is a computer configuration setting, you will need to run GPResult from an administrative command prompt. </STRONG> </P> <BR /> <P> </P> <BR /> <P> <STRONG> <IMG src="" /> </STRONG> </P> <BR /> <P> </P> <BR /> <P> The good news is that the <STRONG> GPResult </STRONG> output will show you the winning GPO with loopback enabled.&nbsp; Unfortunately, it does not list all GPOs with loopback configured, just the one with the highest precedence. 
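Collecting the report is a one-liner; the output path here is just an example:

```powershell
# Collect an HTML Resultant Set of Policy report for this computer and the
# logged-on user. Run from an elevated prompt so the computer-side data
# (including the loopback setting) is included. /f overwrites an existing file.
gpresult /h "$env:USERPROFILE\Desktop\gpreport.html" /f
```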
</P> <BR /> <P> If your OU structure separates users from computers, the <STRONG> GPResult </STRONG> output can also help you find GPOs containing user settings that are linked to computer OUs.&nbsp; Look for GPOs linked to computer OUs under the <STRONG> Applied GPOs </STRONG> section of the <STRONG> User Details </STRONG> of the <STRONG> GPResult </STRONG> output. </P> <BR /> <P> Below is an example of the output of the <STRONG> GPResult /h </STRONG> command from a Windows Server 2012 member server.&nbsp; The layout of the report has changed slightly going from Windows Server 2008 to Windows Server 2012, so your results may look different, but the same information is provided by previous versions of the tool.&nbsp; Notice that the link location includes the Computers OU, but we are in the User Details section of the report.&nbsp; This is a good indication that we have loopback enabled in a GPO linked in the path of the computer account. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <STRONG> <BR /> </STRONG> <STRONG> Understand the desired state vs. the actual state </STRONG> </P> <BR /> <P> This one also sounds obvious, but in order to troubleshoot you have to know and understand exactly which settings you are expecting to apply to the user.&nbsp; This is harder than it sounds.&nbsp; In a lab environment where you control everything, it's pretty easy to keep track of desired configuration.&nbsp; However, in a production environment with potentially multiple delegated GPO admins, this is much more difficult. </P> <BR /> <P> <STRONG> GPResult </STRONG> gives us the actual state, but if you don't know the desired state at the setting level, then you can't reasonably determine if loopback is configured correctly (meaning you have WMI filters and/or security filtering set properly to achieve your desired configuration). 
</P> <BR /> <P> <STRONG> <BR /> </STRONG> <STRONG> Review security filtering on GPOs <BR /> <BR /> </STRONG> Once you determine which GPOs or which settings are not applying as expected, then you have a place to start your investigation. </P> <BR /> <P> In&nbsp;our experience here in support, <EM> loopback processing issues usually come down to incorrect security filtering </EM>, so rule that out first. </P> <BR /> <P> This is where things get tricky . . . If you are configuring custom security filtering on your GPOs, loopback can get confusing quickly.&nbsp; As a general rule, you should try to keep your WMI and security filtering as simple as possible&nbsp;- but&nbsp;ESPECIALLY when loopback is involved.&nbsp; You may want to consider temporarily unlinking any WMI filters for troubleshooting purposes.&nbsp; The goal is to ensure the policies you are expecting to apply are actually applying.&nbsp; Once you determine this, then you can add your WMI filters back into the equation.&nbsp; A test environment is the best place to do this type of investigation. </P> <BR /> <P> Setting up security filtering correctly depends on how you architect your policies: </P> <BR /> <OL> <BR /> <LI> Did you enable loopback in its own GPO or in a GPO with other computer or user settings? </LI> <BR /> <LI> Are you combining user settings and computer settings into the same GPO(s) linked to the computer’s OU? </LI> <BR /> </OL> <BR /> <P> The thing to keep in mind is that if you have what I would call "mixed use" GPOs, then your security filtering has to accommodate <EM> <STRONG> all </STRONG> </EM> of those uses.&nbsp; This is only a problem if you remove Authenticated Users from the security filter on the GPO containing the user settings.&nbsp; If you remove Authenticated Users from the security filter, then you have to think through which settings you are configuring, in which GPOs, to be applied to which computers and users, in which loopback mode....
</P> <BR /> <P> Ouch.&nbsp; That's LOTS of thinking! </P> <BR /> <P> So, unless that sounds like loads of fun to you, it’s best to keep WMI and security filtering as simple as possible.&nbsp; I know that you can’t always leave Authenticated Users in place, but try to think of alternative solutions before removing it when loopback is involved. </P> <BR /> <P> Now to the part that everyone&nbsp;always asks&nbsp;about once they realize their current filter is wrong&nbsp;– How the heck <EM> should </EM> I configure the security filter?! </P> <BR /> <P> </P> <BR /> <P> <STRONG> Security filtering requirements: </STRONG> </P> <BR /> <OL> <BR /> <LI> The computer account must have <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> permissions to the GPO that contains the loopback configuration setting. </LI> <BR /> <LI> If you are configuring user settings in the same GPO as computer settings, then the user and computer accounts will both need <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> permissions to the GPO since there are portions of the GPO that are applicable to both.
</LI> <BR /> <LI> If the user settings are in a separate GPO from the loopback configuration setting (#1 above) and any other computer settings (#2 above), then the GPO containing the user settings requires the following permissions: </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> <STRONG> Merge mode requirements (Vista+): </STRONG> </P> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> User account: </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> (these are the default <BR /> permissions that are applied when you add users to the Security Filtering <BR /> section of the GPO&nbsp; on the Scope tab in <BR /> GPMC) </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Computer account: </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Minimum of <STRONG> READ </STRONG> permission </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> </P> <BR /> <P> <STRONG> Replace mode requirements: </STRONG> </P> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> User account: </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> (these are the default <BR /> permissions that are applied when you add users to the Security Filtering <BR /> section of the GPO&nbsp; on the Scope tab in <BR /> GPMC) </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Computer account: </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> No permissions are required </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> <P> </P> <BR /> <P> </P> <BR /> <H2> Tools for Troubleshooting </H2> <BR /> <P> The number one tool for troubleshooting loopback processing is your <STRONG> GPRESULT </STRONG> output and a solid understanding of the security filtering requirements for loopback processing in your GPO architecture (see above). 
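When reviewing filtering, you don't have to click through every GPO in the GPMC; the GroupPolicy module can dump a GPO's permission entries from the command line. A sketch, assuming the GroupPolicy module is installed and using a hypothetical GPO name:

```powershell
# List every trustee and permission level on a GPO so you can verify
# READ and APPLY (GpoApply) are present where the loopback architecture
# requires them. "Loopback-Replace" is an example name.
Import-Module GroupPolicy
Get-GPPermission -Name "Loopback-Replace" -All |
    Select-Object Trustee, Permission
```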
</P> <BR /> <P> The <STRONG> GPRESULT </STRONG> will tell you which GPOs applied to the user.&nbsp; If a specific GPO failed to apply, then you need to review the security filtering on that GPO and verify: </P> <BR /> <UL> <BR /> <LI> The user has <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> permissions </LI> <BR /> <LI> Depending on your GPO architecture, the computer may need <STRONG> READ </STRONG> or it may need <STRONG> READ </STRONG> and <STRONG> APPLY </STRONG> if you combined computer and user settings in the same GPO. </LI> <BR /> </UL> <BR /> <P> The same strategy applies if you have mysterious policy settings applying after configuring loopback and you are not sure why.&nbsp; Use your <STRONG> GPRESULT </STRONG> output to identify which GPO(s) the policy settings are coming from and then review the security filtering of those GPOs. </P> <BR /> <P> The <A href="#" target="_blank"> Group Policy Operational logs </A> from the computer will also tell you which GPOs were discovered and applied, but this is the same information that you will get from the <STRONG> GPRESULT </STRONG>. </P> <BR /> <H2> Recommendations for using loopback </H2> <BR /> <P> After working my fair share of loopback-related cases, I've collected a list of recommendations for using loopback.&nbsp; This isn’t an official list of "best practices", but rather just some personal recommendations that may make your life easier.&nbsp; ENJOY! </P> <BR /> <P> I'll start with what is fast becoming my mantra: <STRONG> Keep it Simple. </STRONG> Pretty much all of my recommendations can come back to this point. </P> <BR /> <P> </P> <BR /> <P> 1. Don't use loopback&nbsp; :-) </P> <BR /> <P> OK, I know, not realistic.&nbsp; How about this . . . Don't use loopback unless you absolutely have to.
</P> <BR /> <UL> <BR /> <LI> I say this not because there is something evil about loopback, but rather because loopback complicates how you think about Group Policy processing.&nbsp; Loopback tends to be configured and then forgotten about until you start seeing unexpected results. </LI> <BR /> </UL> <BR /> <P> 2. Use a separate GPO for the loopback setting; ONLY include the loopback setting in this GPO, and do not include the user settings.&nbsp; Name it Loopback-Merge&nbsp;or Loopback-Replace depending on the mode. </P> <BR /> <UL> <BR /> <LI> This makes loopback very easy to identify in both the GPMC and in your <STRONG> GPRESULT </STRONG> output.&nbsp; In the GPMC, you will be able to see where the GPO is linked and the mode without needing to view the settings or details of any GPOs.&nbsp; Your <STRONG> GPRESULT </STRONG> output will clearly list the loopback policy in the list of applied policies and you will also know the loopback mode, without digging into the report. Using a separate policy also allows you to manage the security of the loopback GPO separately from the security on the GPOs containing the user settings. </LI> <BR /> </UL> <BR /> <P> 3. Avoid custom security filtering if you can help it. </P> <BR /> <UL> <BR /> <LI> Loopback works without a hitch if you leave Authenticated Users in the security filtering of the GPO.&nbsp; Removing Authenticated Users results in a lot more work for you in the long run and makes troubleshooting undesired behaviors much more complicated. </LI> <BR /> </UL> <BR /> <P> 4. Don't enable loopback in a GPO linked at the domain level!
</P> <BR /> <UL> <BR /> <LI> This will impact your Domain Controllers.&nbsp; I wouldn't be including this warning if I hadn't worked several cases where loopback had been inadvertently applied to Domain Controllers.&nbsp; Again, there isn’t anything inherently wrong with applying loopback on Domain Controllers.&nbsp; It is bad, however, when loopback unexpectedly applies to Domain Controllers. </LI> <BR /> <LI> If you absolutely MUST enable loopback in a GPO linked at the domain level, then block inheritance on your Domain Controllers OU.&nbsp; If you do this, you will need to link the Default Domain Policy back to the Domain Controllers OU, making sure to have the precedence of the Default Domain Controllers policy higher (lower number) than the Domain Policy. </LI> <BR /> <LI> In general, be careful with all policies linked at the domain level.&nbsp; Yes, it may be "simpler" to manage most policy at the domain level, but it can lead to lazy administration practices and make it very easy to forget about the impact of seemingly minor policy changes on your DCs. </LI> <BR /> <LI> Even if you are editing the security filtering to specific computers, it is still dangerous to have the loopback setting in a GPO linked at the domain level.&nbsp; What if someone mistakenly modifies the security filtering to "fix" some other issue? </LI> <BR /> <UL> <BR /> <LI> <STRONG> TEST, TEST, TEST!!! </STRONG> It’s even more important to test when you&nbsp;are modifying GPOs that impact domain controllers.&nbsp; Making a change at the domain level&nbsp;that negatively impacts a domain controller can be career altering.&nbsp; Even if you have to set up a test&nbsp;domain in virtual machines on your own workstation, find a way to test. </LI> <BR /> </UL> <BR /> </UL> <BR /> <P> 5. Always test in a representative environment prior to deploying loopback in production.
</P> <BR /> <UL> <BR /> <LI> Try to duplicate your production GPOs as closely as possible.&nbsp; Export/Import is a great way to do this. </LI> <BR /> <LI> Enabling loopback almost always surfaces some settings that you weren't aware of.&nbsp; Unless you are diligent about disabling unused portions of GPOs and you perform periodic audits of actual configuration versus documented desired state configuration, there will typically be a few settings that are outside of your desired configuration. </LI> <BR /> <LI> Duplicating your production policies in a test environment means you will find these anomalies before you make the changes in production. </LI> <BR /> </UL> <BR /> <P> </P> <BR /> <P> That’s all folks!&nbsp; You are now ready to go forth and conquer all of those loopback policies! </P> <BR /> <P> </P> <BR /> <P> <STRONG> Kim <EM> “1.21 Gigawatts!!” </EM> Nichols </STRONG> </P> </BODY></HTML> Fri, 05 Apr 2019 02:51:16 GMT TechCommunityAPIAdmin 2019-04-05T02:51:16Z We're back. Did you miss us? <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on May 17, 2013 </STRONG> <BR /> Hey all, <A href="#" target="_blank"> David </A> here.&nbsp; Now that we’ve <A href="#" target="_blank"> broken the silence </A> , we here on the DS team felt that we owed you, dear readers, an explanation of some sort.&nbsp; Plus, we wanted to talk about the blog itself, some changes happening for us, and what you should hopefully be able to expect moving forward. <BR /> <BR /> <BR /> <BR /> <STRONG> So, what had happened was…. 
</STRONG> <BR /> <BR /> As most of you know, a few months ago our <A href="#" target="_blank"> editor-in-chief </A> and the butt of many jokes here on the DS support team <A href="#" target="_blank"> moved to a new position </A> .&nbsp; We have it on good authority that he is thoroughly terrorizing many of our developers in Redmond with scary words like “documentation”, “supportability”, and other Chicago-style aphorisms which are best not repeated in print. <BR /> <BR /> Unfortunately for us and for this blog, that left us with a little bit of a hole in the editing team!&nbsp; The folks left behind might have been superheroes, but the problem with being a superhero is that you get called on to go save the world (or a customer with a crisis) all the time, and that doesn’t leave much time for picking up your cape from the dry cleaners, let alone keeping up with editing blog submissions, doing mail sacks, and generally keeping the blog going. <BR /> <BR /> At the same time, we had a bit of a reorganization internally.&nbsp; Where we were formerly one team within support, we are now two teams – DS (Directory Services) and ID (Identity).&nbsp; Why the distinction?&nbsp; Well, you may have heard about this Cloud thing…. But that’s a story best told by technical blog posts, really.&nbsp; For now, let’s just say the scope of some of what we do expanded last year from “a lot of people use it” to “internet scale”.&nbsp; Pretty scary when you think about it. 
<BR /> <BR /> Just to make things even more confusing, about a month ago we were officially reunited with our long-lost (and slightly insane, but in a good way) brethren in the <A href="#" target="_blank"> field engineering organization </A> .&nbsp; That’s right, our two orgs have been glommed <A href="#" title="" target="_blank"> [1] </A> together into one giant concentration of support engineering superpower.&nbsp; While it’s opening up some really cool stuff that we have always wanted to do but couldn’t before, it’s still the equivalent of waking up one day and finding out that all of those cousins you see every few years at family reunions are coming to live with you.&nbsp; In your house.&nbsp; Oh, and they’re bringing their dog. <BR /> <BR /> Either way, the net effect of all this massive change was that we sort of got quiet for a few months.&nbsp; It wasn’t you, honest.&nbsp; It was us. <BR /> <BR /> <BR /> <BR /> <STRONG> What to Expect </STRONG> <BR /> <BR /> It’s important to us that we keep this blog current with detailed, pertinent technical info that helps you resolve issues that you might encounter, or even just helps you understand how our parts of Windows work a bit better.&nbsp; So, we’re picking that torch back up and we’ll be trying to get several good technical posts up each month for you.&nbsp; You may also see some shorter posts moving forward.&nbsp; The idea is to break up the giant articles and try to get some smaller, useful-to-know things out there every so often. 
&nbsp;Internally, we’re calling the little posts “DS Quickies” but no promises on whether we’ll actually give you that as a category to&nbsp;search on.&nbsp; Yes, we’re cruel like that.&nbsp; You’ll also see the return of the mail sack at some point in the near future, and most importantly you’re going to see some new names showing up as writers.&nbsp; We’ve put out the call, and we’re planning to bring you blog posts written not just by our folks in the Americas, but also in Europe and Asia.&nbsp; You can probably also expect some guest posts from our kin in the PFE organization, when they have something specific to what we do that they want to talk about. <BR /> <BR /> At the same time, we’re keen on keeping the stuff that makes our blog useful and fun.&nbsp; So you can continue to expect technical depth, detailed analysis, plain-English explanations, and occasional irreverent, snarky humor.&nbsp; We’re not here to tell you why you should buy Windows clients or servers (or phones, or tablets) – we have plenty of marketing websites that do that better than we ever could.&nbsp; Instead, we’re here to help you understand how Windows works and how to fix problems when they occur.&nbsp; Although we do reserve the right to post blatant wackiness or fun things every so often too.&nbsp; Look, we don’t get out much, ok?&nbsp; This is our outlet.&nbsp; Just go with us on this. <BR /> <BR /> Finally, you’re going to see me personally posting a bit more, since I’ve taken over as the primary editor for the site.&nbsp; I know - I tried to warn them what would happen, but they still gave me the job all the same.&nbsp; Jokes aside, I feel like it’s important that our blog isn’t just an encyclopedia of awesome technical troubleshooting, but also that it showcases the fact that we’re real people doing our best to make the IT world a better place, as sappy as that sounds. 
(Except for <A href="#" target="_blank"> David Fisher </A> – I’m convinced he’s really a robot).&nbsp; I have a different writing style than Ned and Jonathan, and a different sense of humor, but I promise to contain myself as much as possible.&nbsp; :-) <BR /> <BR /> Sound good?&nbsp; We hope so.&nbsp; We’re going to go off and write some more technical stuff now – in fact:&nbsp; On deck for next week:&nbsp; A followup to <A href="#" target="_blank"> Kim’s </A> blog on <A href="#" target="_blank"> Loopback Policy Processing </A> . <BR /> <BR /> <P> We wanted to leave you with a funny video that’s safe for work to help kick off the weekend, but alas our bing fu was weak today.&nbsp; Got a good one to share?&nbsp; Feel free to link it for us in the comments! </P> <DIV> <DIV> <P> <A href="#" title="" target="_blank"> [1] </A> <BR /> “Glom” is a technical term, by the way, not a managerial one.&nbsp; Needless to say, hijinks are continuing to <BR /> ensue. </P> <P> </P> <P> -- David "Capes are cool" Beach </P> </DIV> </DIV> </BODY></HTML> Fri, 05 Apr 2019 02:50:51 GMT TechCommunityAPIAdmin 2019-04-05T02:50:51Z AD FS 2.0 Claims Rule Language Part 2 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on May 07, 2013 </STRONG> <BR /> <P> Hello, <A href="#" target="_blank"> Joji Oshima </A> here to dive deeper into the Claims Rule Language for AD FS. A while back I wrote a <A href="#" target="_blank"> getting started post </A> on the claims rule language in AD FS 2.0. If you haven't seen it, I would start with that article first as I'm going to build on the claims rule language syntax discussed in that earlier post. In this post, I'm going to cover more complex claim rules using Regular Expressions ( <EM> RegEx </EM> ) and how to use them to solve real world issues. </P> <BR /> <H2> An Introduction to Regex </H2> <BR /> <P> The use of RegEx allows us to search or manipulate data in many ways in order to get a desired result. 
Without RegEx, when we do comparisons or replacements we must look for an exact match. Most of the time this is sufficient but what if you need to search or replace based on a pattern? Say you want to search for strings that simply start with a particular word. RegEx uses pattern matching to look at a string with more precision. We can use this to control which claims are passed through, and even manipulate the data inside the claims. </P> <BR /> <H2> Using RegEx in searches </H2> <BR /> <P> Using RegEx to pattern match is accomplished by changing the standard double equals "==" to "=~" and by using special metacharacters in the condition statement. I'll outline the more commonly used ones, but there are <A href="#" target="_blank"> good resources </A> available online that go into more detail. For those of you unfamiliar with RegEx, let's first look at some common RegEx metacharacters used to build pattern templates and what the result would be when using them. </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Symbol </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Operation </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Example rule </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> ^ </P> <BR /> </TD> <TD> <BR /> <P> Match the beginning of a string </P> <BR /> </TD> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "^director"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any role claims that start with "director" </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> $ </P> <BR /> </TD> <TD> <BR /> <P> Match the end of a string </P> <BR /> </TD> <TD> <BR /> 
<P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "$"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any email claims that end with "" </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> | </P> <BR /> </TD> <TD> <BR /> <P> OR </P> <BR /> </TD> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "^director|^manager"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any role claims that start with "director" or "manager" </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> (?i) </P> <BR /> </TD> <TD> <BR /> <P> Not case sensitive </P> <BR /> </TD> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "(?i)^director"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any role claims that start with "director" regardless of case </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> x.*y </P> <BR /> </TD> <TD> <BR /> <P> "x" followed by "y" </P> <BR /> </TD> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "(?i)Seattle.*Manager"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any role claims that contain "Seattle" followed by "Manager" regardless of case. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> + </P> <BR /> </TD> <TD> <BR /> <P> Match preceding character one or more times </P> <BR /> </TD> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>", Value =~ "^0+"] </EM> </P> <BR /> <P> <EM> =&gt; issue (claim = c); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any employeeId claims that start with at least one "0" </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> * </P> <BR /> </TD> <TD> <BR /> <P> Match preceding character zero or more times </P> <BR /> </TD> <TD> <BR /> <P> Similar to above, more useful in RegExReplace() scenarios.
</P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Using RegEx in string manipulation </H2> <BR /> <P> RegEx pattern matching can also be used in replacement scenarios. It is similar to a "find and replace", but using pattern matching instead of exact values. To use this in a claim rule, we use the RegExReplace() function in the value section of the issuance statement. </P> <BR /> <P> The RegExReplace() function accepts three parameters. </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> The first is the string in which we are searching. </DIV> <BR /> <OL> <BR /> <LI> We will typically want to search the value of the incoming claim (c.Value), but this could be a combination of values (c1.Value + c2.Value). </LI> <BR /> </OL> </LI> <BR /> <LI> The second is the RegEx pattern we are searching for in the first parameter. </LI> <BR /> <LI> The third is the string value that will replace any matches found. </LI> <BR /> </OL> <BR /> <P> Example: </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <EM> c:[type == "<A href="#" target="_blank"></A>"] <BR /> =&gt; issue (Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c.Value, "(?i)director", "Manager")); </EM> </P> <BR /> <P> </P> <BR /> <P> Pass through any role claims. If any of the claims contain the word "Director", RegExReplace() will change it to "Manager". For example, "Director of Finance" would pass through as "Manager of Finance". </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Real World Examples </H2> <BR /> <P> Let's look at some real world examples of regular expressions in claims rules. </P> <BR /> <H2> Problem 1: </H2> <BR /> <P> We want to add claims for all group memberships, including distribution groups. </P> <BR /> <H2> Solution: </H2> <BR /> <P> Typically, group membership is added using the wizard, selecting Token-Groups Unqualified Names and mapping it to the Group or Role claim.
This will only pull security groups, not distribution groups, and will not contain Domain Local groups. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> We can pull from memberOf, but that will give us the entire distinguished name, which is not what we want. One way to solve this problem is to use three separate claim rules and use RegExReplace() to remove unwanted data. </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 1: Pull memberOf, add to working set "phase 1" </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>", Issuer == "AD AUTHORITY"] <BR /> =&gt; add(store = "Active Directory", types = ("<A href="#" target="_blank"></A>"), query = ";memberOf;{0}", param = c.Value); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Example: </STRONG> "CN=Group1,OU=Users,DC=contoso,DC=com" is put into a phase 1 claim. </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 2: Drop everything after the first comma, add to working set "phase 2" </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>"] <BR /> =&gt; add(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c.Value, ",[^\n]*", "")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Example: </STRONG> We process the value in the phase 1 claim and put "CN=Group1" into a phase 2 claim. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c.Value, ",[^\n]*", "") </P> <BR /> <UL> <BR /> <LI> <STRONG> c.Value </STRONG> is the value of the phase 1 claim. This is what we are searching in. 
</LI> <BR /> <LI> <STRONG> ",[^\n]*" </STRONG> is the RegEx syntax used to find the first comma, plus everything after it. </LI> <BR /> <LI> <STRONG> "" </STRONG> is the replacement value. Since there is no string, it effectively removes any matches. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 3: Drop CN= at the beginning, add to outgoing claim set as the standard role claim </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>"] </EM> </P> <BR /> <P> <EM> =&gt; issue(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c.Value, "^CN=", "")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Example: </STRONG> We process the value in the phase 2 claim and put "Group1" into the role claim. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c.Value, "^CN=", "") </P> <BR /> <UL> <BR /> <LI> <STRONG> c.Value </STRONG> is the value of the phase 2 claim. This is what we are searching in. </LI> <BR /> <LI> <STRONG> "^CN=" </STRONG> is the RegEx syntax used to find "CN=" at the beginning of the string. </LI> <BR /> <LI> <STRONG> "" </STRONG> is the replacement value. Since there is no string, it effectively removes any matches. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Problem 2: </H2> <BR /> <P> We need to compare the values in two different claims and only allow access to the relying party if they match. </P> <BR /> <H2> Solution: </H2> <BR /> <P> In this case we can use RegExReplace(). This is not the typical use of this function, but it works in this scenario. The function will attempt to match the pattern in the first data set with the second data set.
If they match, it will issue a new claim with the value of "Yes". This new claim can then be used to grant access to the relying party. That way, if these values do not match, the user will not have this claim with the value of "Yes". </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c1:[Type == "<A href="#" target="_blank"></A>"] &amp;&amp; </EM> </P> <BR /> <P> <EM> c2:[Type == "<A href="#" target="_blank"></A>"] </EM> </P> <BR /> <P> <EM> =&gt; issue(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c1.Value, c2.Value, "Yes")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <STRONG> Example: </STRONG> If there is a data1 claim with the value of "contoso" and a data2 claim with a value of "contoso", it will issue a UserAuthorized claim with the value of "Yes". However, if data1 is "adatum" and data2 is "fabrikam", it will issue a UserAuthorized claim with the value of "adatum". </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c1.Value, c2.Value, "Yes") </P> <BR /> <UL> <BR /> <LI> <STRONG> c1.Value </STRONG> is the value of the data1 claim. This is what we are searching in. </LI> <BR /> <LI> <STRONG> c2.Value </STRONG> is the value of the data2 claim. This is what we are searching for. </LI> <BR /> <LI> <STRONG> "Yes" </STRONG> is the replacement value. Only if c1.Value &amp; c2.Value match will there be a pattern match and the string will be replaced with "Yes". Otherwise the claim will be issued with the value of the data1 claim. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Problem 3: </H2> <BR /> <P> Let's take a second look at a potential issue with our solution to problem 2.
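The comparison trick from Problem 2, and the way it breaks down, can be sketched with any regex engine. Below is a minimal Python approximation; AD FS itself uses the .NET regex engine, so treat this as an illustration of the behavior rather than the claims engine:

```python
import re

def regex_replace(value: str, pattern: str, replacement: str) -> str:
    # Approximates RegExReplace(): replace every match of pattern in value.
    return re.sub(pattern, replacement, value)

# Matching claim values: the whole value matches, so it becomes "Yes".
print(regex_replace("contoso", "contoso", "Yes"))   # Yes
# Non-matching values: nothing matches, so the data1 value passes through.
print(regex_replace("adatum", "fabrikam", "Yes"))   # adatum
# A backslash on the pattern side starts an escape sequence ("\J" here),
# which is invalid and makes the whole comparison fail outright.
try:
    regex_replace("Contoso\\John", "Contoso\\John", "Yes")
except re.error as exc:
    print("comparison broke:", exc)
```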
Since we are using the value of one of the claims as the RegEx syntax, we must be careful to check for certain RegEx metacharacters that would make the comparison mean something different. The backslash is used in some RegEx metacharacters, so any backslashes in the values will throw off the comparison and it will always fail, even if the values match. </P> <BR /> <H2> Solution: </H2> <BR /> <P> In order to ensure that our matching claim rule works, we must sanitize the input values by removing any backslashes before doing the comparison. We can do this by taking the data that would go into the initial claims, putting it in a holding attribute, and then using RegEx to strip out the backslash. The example below only shows the sanitization of data1, but it would be similar for data2. </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 1: Pull attribute1, add to holding attribute "<A href="#" target="_blank"></A>" </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>", Issuer == "AD AUTHORITY"] </EM> </P> <BR /> <P> <EM> =&gt; add(store = "Active Directory", types = ("<A href="#" target="_blank"></A>"), query = ";attribute1;{0}", param = c.Value); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Example: </STRONG> The value in attribute 1 is "Contoso\John" which is placed in the data1holder claim.
</P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 2: Strip the backslash from the holding claim and issue the new data1 claim </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>", Issuer == "AD AUTHORITY"] </EM> </P> <BR /> <P> <EM> =&gt; issue(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c.Value,"\\","")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Example: </STRONG> We process the value in the data1holder claim and put "ContosoJohn" in a data1 claim. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c.Value,"\\","") </P> <BR /> <UL> <BR /> <LI> <STRONG> c.Value </STRONG> is the value of the data1holder claim. This is what we are searching in. </LI> <BR /> <LI> <STRONG> "\\" </STRONG> matches a single literal backslash. In RegEx, a backslash in front of a character escapes it, so "\\" means a literal backslash rather than a metacharacter. </LI> <BR /> <LI> <STRONG> "" </STRONG> is the replacement value. Since there is no string, it effectively removes any matches. </LI> <BR /> </UL> <BR /> <P> </P> <BR /> <P> An alternate solution would be to pad each backslash in the data2 value with a second backslash. That way each backslash would be represented as a literal backslash. We could accomplish this by using RegExReplace(c.Value,"\\","\\") against a data2 input value. </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Problem 4: </H2> <BR /> <P> Employee numbers vary in length, but we need to have exactly 9 characters in the claim value. Employee numbers that are shorter than 9 characters should be padded in the front with leading zeros.
</P> <BR /> <H2> Solution: </H2> <BR /> <P> In this case we can create a buffer claim, join that with the employee number claim, and then use RegEx to keep the rightmost 9 characters of the combined string. </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 1: Create a buffer claim to create the zero-padding </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> =&gt; add(Type = "Buffer", Value = "000000000"); </EM> </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 2: Pull the employeeNumber attribute from Active Directory, place it in a holding claim </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>", Issuer == "AD AUTHORITY"] </EM> </P> <BR /> <P> <EM> =&gt; add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value); </EM> </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 3: Combine the two values, then use RegEx to remove all but the 9 rightmost characters. </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c1:[Type == "Buffer"] </EM> </P> <BR /> <P> <EM> &amp;&amp; c2:[Type == "ENHolder"] </EM> </P> <BR /> <P> <EM> =&gt; issue(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", "")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", "") </P> <BR /> <UL> <BR /> <LI> <STRONG> c1.Value + c2.Value </STRONG> is the employee number padded with nine zeros.
This is what we are searching in. </LI> <BR /> <LI> <STRONG> ".*(?=.{9}$)" </STRONG> matches everything before the last nine characters of the string; the lookahead (?=.{9}$) stops the match when exactly nine characters remain. This is what we are searching for and removing. We could replace the 9 with any number and have it keep the last "X" number of characters. </LI> <BR /> <LI> <STRONG> "" </STRONG> is the replacement value. Since there is no string, it effectively removes any matches. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Problem 5: </H2> <BR /> <P> Employee numbers contain leading zeros but we need to remove those before sending them to the relying party. </P> <BR /> <H2> Solution: </H2> <BR /> <P> In this case we can pull the employee number from Active Directory and place it in a holding claim, then use RegEx to strip out any leading zeros. </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 1: Pull the employeeNumber attribute from Active Directory, place it in a holding claim </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "<A href="#" target="_blank"></A>", Issuer == "AD AUTHORITY"] </EM> </P> <BR /> <P> <EM> =&gt; add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value); </EM> </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Phase 2: Take the value in ENHolder and remove any leading zeros.
</STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> </P> <BR /> <P> <EM> c:[Type == "ENHolder"] </EM> </P> <BR /> <P> <EM> =&gt; issue(Type = "<A href="#" target="_blank"></A>", Value = RegExReplace(c.Value, "^0*", "")); </EM> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Digging Deeper: </STRONG> RegExReplace(c.Value, "^0*", "") </P> <BR /> <UL> <BR /> <LI> <STRONG> c.Value </STRONG> is the employee number. This is what we are searching in. </LI> <BR /> <LI> <STRONG> "^0*" </STRONG> finds any leading zeros. This is what we are searching for. If we only had ^0 it would only match a single leading zero. If we had 0* it would find any zeros in the string. </LI> <BR /> <LI> <STRONG> "" </STRONG> is the replacement value. Since there is no string, it effectively removes any matches. </LI> <BR /> </UL> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H2> Conclusion </H2> <BR /> <P> As you can see, RegEx adds powerful functionality to the claims rule language. It has a high initial learning curve, but once you master it you will find that there are few scenarios that RegEx can't solve. I would highly recommend <A href="#" target="_blank"> searching </A> for an online RegEx syntax tester as it will make learning and testing much easier. I'll continue to expand the TechNet wiki article so I would check there for more details on the claims rule language.
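Before reaching for an online tester, the patterns from the problems above can also be sanity-checked against any regex engine. A quick Python pass (approximate only; AD FS uses .NET regular expressions):

```python
import re

# Problem 1: trim a distinguished name down to the group name.
dn = "CN=Group1,OU=Users,DC=contoso,DC=com"
cn = re.sub(r",[^\n]*", "", dn)     # drop the first comma and everything after it
group = re.sub(r"^CN=", "", cn)     # drop the leading CN=
print(group)                        # Group1

# Problem 3: strip backslashes before using a value as a pattern.
print(re.sub(r"\\", "", "Contoso\\John"))    # ContosoJohn

# Problem 4: left-pad with the nine-zero buffer, keep the rightmost nine.
padded = "000000000" + "12345"
print(re.sub(r".*(?=.{9}$)", "", padded))    # 000012345

# Problem 5: remove leading zeros only.
print(re.sub(r"^0*", "", "000452"))          # 452
```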
</P> <BR /> <P> <A href="#" target="_blank"> Understanding Claim Rule Language in AD FS 2.0 </A> </P> <BR /> <P> <A href="#" target="_blank"> AD FS 2.0: Using RegEx in the Claims Rule Language </A> </P> <BR /> <P> <A href="#" target="_blank"> Regular Expression Syntax </A> </P> <BR /> <P> <A href="#" target="_blank"> AD FS 2.0 Claims Rule Language Primer </A> </P> <BR /> <P> Until next time, </P> <BR /> <P> Joji "Claim Jumper" Oshima </P> </BODY></HTML> Fri, 05 Apr 2019 02:50:45 GMT Ryan Ries 2019-04-05T02:50:45Z Circle Back to Loopback <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Feb 08, 2013 </STRONG> <BR /> <P> Hello again! <A href="#" target="_blank"> Kim Nichols </A> here again.&nbsp; For this post, I'm taking a break from the AD LDS discussions (hold your applause until the end) and going back to a topic near and dear to my heart - Group Policy loopback processing. </P> <BR /> <P> Loopback processing is not a new concept to Group Policy, but it still causes confusion for even the most experienced Group Policy administrators. </P> <BR /> <P> This post is the first part of a two-part blog series on User Group Policy Loopback processing. </P> <BR /> <UL> <BR /> <LI> <BR /> <DIV> Part 1 provides a general Group Policy refresher and introduces Loopback processing </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Part 2 covers Troubleshooting Group Policy loopback processing </DIV> <BR /> </LI> <BR /> </UL> <BR /> <P> Hopefully these posts will refresh your memory and provide some tips for troubleshooting Group Policy processing when loopback is involved.
</P> <BR /> <H1> Part 1: Group Policy and Loopback processing refresher </H1> <BR /> <H2> Normal Group Policy Processing </H2> <BR /> <P> Before we dig in too deeply, let's quickly cover normal Group Policy processing.&nbsp; Thinking back to when we first learned about Group Policy processing, we learned that Group Policy <BR /> applies in the following order: </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> Local Group Policy </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Site </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Domain </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> OU </DIV> <BR /> </LI> <BR /> </OL> <BR /> <P> You may have heard Active Directory “old timers” refer to this as <A href="#" target="_blank"> LSDOU </A> .&nbsp; As a result of LSDOU, settings from GPOs linked closest (lower in OU structure) to the user take precedence over those linked farther from the user (higher in OU structure).&nbsp;GPO configuration options such as <STRONG> Block Inheritance </STRONG> and <STRONG> Enforced </STRONG> (previously called No Override for you old school admins) can modify processing as well, but we will keep things simple for the purposes of this example.&nbsp; Normal user group policy processing applies user settings from GPOs linked to the Site, Domain, and OU containing the user object regardless of the location of the computer object in Active Directory. </P> <BR /> <P> Let's use a picture to clarify this.&nbsp; For this example, the user is in the "E" OU and the computer is in the "G" OU of the domain.
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Following normal group policy processing rules (assuming all policies apply to Authenticated Users with no WMI filters or "Block Inheritance" or "Enforced" policies), user settings of Group Policy objects apply in the following order: </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> Local Computer Group Policy </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policies linked to the Site </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policies linked to the Domain </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policies linked to OU "A" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policies linked to OU "B" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policies linked to OU "E" </DIV> <BR /> </LI> <BR /> </OL> <BR /> <P> That’s pretty straightforward, right?&nbsp; Now, let’s move on to loopback processing! </P> <BR /> <H2> What is loopback processing? </H2> <BR /> <P> Group Policy loopback is a computer configuration setting that enables different Group Policy user settings to apply based upon the computer from which logon occurs. </P> <BR /> <P> Breaking this down a little more: </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> It is a computer configuration setting. (Remember this for later) </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> When enabled, user settings from GPOs applied to the computer apply to the logged on user. </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Loopback processing changes the list of applicable GPOs and the order in which they apply to a user. </DIV> <BR /> </LI> <BR /> </OL> <BR /> <H2> Why would I use loopback processing? </H2> <BR /> <P> Administrators use loopback processing in kiosk, lab, and Terminal Server environments to provide a consistent user experience across all computers regardless of the GPOs linked to the user's OU.
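The precedence rule behind normal processing (later-applied GPOs overwrite earlier ones, so the link closest to the user wins conflicts) can be modeled with a short toy sketch. This is only an illustration of ordering and conflict resolution, not how the Group Policy engine is actually implemented; the setting names are made up:

```python
def effective_settings(gpos):
    # gpos: (link, settings) pairs in application order: Local, Site,
    # Domain, then OUs from the top down (LSDOU). Later entries overwrite
    # earlier ones, which is why the GPO linked closest to the user wins.
    result = {}
    for _link, settings in gpos:
        result.update(settings)
    return result

order = [
    ("Local", {"wallpaper": "default"}),
    ("Site", {"proxy": "site-proxy"}),
    ("Domain", {"wallpaper": "corp"}),
    ("OU A", {"screensaver": "15min"}),
    ("OU B", {}),
    ("OU E", {"wallpaper": "team-e"}),
]
print(effective_settings(order))
# "wallpaper" comes from OU E, the link closest to the user
```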
</P> <BR /> <P> Our recommendation for loopback is similar to our recommendations for WMI filters, Block Inheritance and policy Enforcement; use them sparingly.&nbsp; All of these configuration options modify the default processing of policy and thus make your environment more complex to troubleshoot and maintain. As I've mentioned in other posts, whenever possible, keep your designs as simple as possible. You will save yourself countless nights/weekends/holidays in the office because you will be able to identify configuration issues more quickly and easily. </P> <BR /> <H2> How to configure loopback processing </H2> <BR /> <P> The loopback setting is located under <STRONG> Computer Configuration/Administrative Templates/System/Group Policy </STRONG> in the Group Policy Management Editor (GPME). </P> <BR /> <P> Use the policy setting <STRONG> Configure user Group Policy loopback processing mode </STRONG> to configure loopback in Windows 8 and Windows Server 2012. Earlier versions of Windows have the same policy setting under the name <STRONG> User Group Policy loopback processing mode. </STRONG> The screenshot below is from the Windows 8 version of the GPME. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> When you enable loopback processing, you also have to select the desired mode.&nbsp; There are two modes for loopback processing:&nbsp; Merge or Replace. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H2> Loopback Merge vs. Replace </H2> <BR /> <P> Prior to the start of user policy processing, the Group Policy engine checks to see if loopback is enabled and, if so, in which mode. </P> <BR /> <P> We'll start off with an explanation of Merge mode since it builds on our existing knowledge of user policy processing.
</P> <BR /> <H3> Loopback Merge </H3> <BR /> <P> During loopback processing in merge mode, user GPOs process first (exactly as they do during normal policy processing), but with an additional step.&nbsp; Following normal user policy processing the Group Policy engine applies user settings from GPOs linked to the computer's OU. &nbsp;The result-- the user receives all user settings from GPOs applied to the user and all user settings from GPOs applied to the computer. The user settings from the computer’s GPOs win any conflicts since they apply last. </P> <BR /> <P> To illustrate loopback merge processing and conflict resolution, let’s use a simple chart.&nbsp; The chart shows us the “winning”&nbsp;configuration in each of three scenarios: </P> <BR /> <UL> <BR /> <LI> <BR /> <DIV> The same user policy setting is configured in GPOs linked to the user and the computer </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> The user policy setting is only configured in a GPO linked to the user’s OU </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> The user policy setting is only configured in a GPO linked to the computer’s OU </DIV> <BR /> </LI> <BR /> </UL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Now, going back to our original example, loopback processing in Merge mode applies user settings from GPOs linked to the user’s OU followed by user settings from GPOs linked to the computer’s OU. 
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> GPOs for the user in OU ”E” apply in the following order (the first part is identical to normal user policy processing from our original example): </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> Local Group Policy </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Site </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Domain </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "A" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "B" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "E" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Site </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Domain </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "A" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "C" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "G" </DIV> <BR /> </LI> <BR /> </OL> <BR /> <H3> Loopback Replace </H3> <BR /> <P> Loopback replace is much easier.&nbsp;During loopback processing in replace mode, the user settings applied to the computer “replace” those applied to the user.&nbsp; In actuality, the Group Policy service skips the GPOs linked to the user’s OU. Group Policy effectively processes as if user object was in the OU of the computer rather than its current OU. </P> <BR /> <P> The chart for loopback processing in replace mode shows that settings “1” and “2” do not apply since all user settings linked to the user’s OU are skipped when loopback is configured in replace mode. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Returning to our example of the user in the “E” OU, loopback processing in replace mode skips normal user policy processing and only applies user settings from GPOs linked to the computer. 
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The resulting processing order is: </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> Local Group Policy </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Site </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to the Domain </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "A" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "C" </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Group Policy objects linked to OU "G" </DIV> <BR /> </LI> <BR /> </OL> <BR /> <H2> Recap </H2> <BR /> <OL> <BR /> <LI> <BR /> <DIV> User Group Policy loopback processing is a computer configuration setting. </DIV> <BR /> </LI> <BR /> </OL> <UL> <BR /> <LI> <BR /> <DIV> Loopback processing <STRONG> is not </STRONG> specific to the GPO in which it is configured. If we think back to what an Administrative Template policy is, we know it is just configuring a registry value.&nbsp; In the case of the loopback policy processing setting, once this registry setting is configured, the order and scope of user group policy processing for all users logging on to the computer is modified per the mode chosen: Merge or Replace. </DIV> <BR /> </LI> <BR /> </UL> <BR /> <LI> <BR /> <DIV> Merge mode applies GPOs linked to the user object first, followed by GPOs with user settings linked to the computer object. </DIV> <BR /> </LI> <BR /> <UL> <BR /> <LI> <BR /> <DIV> The order of processing determines the precedence. GPOs with user settings linked to the computer object apply last and therefore have a higher precedence than those linked to the user object. </DIV> <BR /> </LI> <BR /> <LI> <BR /> <DIV> Use merge mode in scenarios where you need users to receive the settings they normally receive, but you want to customize or make changes to those settings when they log on to specific computers.
</DIV> <BR /> </LI> <BR /> </UL> <BR /> <LI> <BR /> <DIV> Replace mode completely skips Group Policy objects linked in the path of the user and only applies user settings in GPOs linked in the path of the computer.&nbsp; Use replace mode when you need to disregard all GPOs that are linked in the path of the user object. </DIV> <BR /> </LI> <BR /> <BR /> <P> Those are the basics of user group policy loopback processing. In my next post, I'll cover the troubleshooting process when loopback is enabled. </P> <BR /> <P> Kim “Why does it say paper jam, when there is no paper jam!?” Nichols </P> <BR /> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 02:50:32 GMT Ryan Ries 2019-04-05T02:50:32Z Distributed File System Consolidation of a Standalone Namespace to a Domain-Based Namespace <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Feb 06, 2013 </STRONG> <BR /> <P> Hello again everyone! <A href="#" target="_blank"> David </A> here to discuss a scenario that is becoming more and more popular for administrators of Distributed File System Namespaces (DFSN): consolidation of one or more standalone namespaces that are referenced by a domain-based namespace. Below I detail how this may be achieved. </P> <BR /> <H2> History: Why create interlinked namespaces? </H2> <BR /> <P> First, we should quickly review the history of why so many administrators designed interlinked namespaces. </P> <BR /> <P> In Windows Server 2003 (and earlier) versions of DFSN, domain-based namespaces were limited to hosting approximately 5,000 DFS folders per namespace. This limitation was simply due to how the Active Directory JET database engine stored a single binary value of an attribute. We now refer to this type of namespace as "Windows 2000 Server Mode". Standalone DFS namespaces (those stored locally in the registry of a single namespace server or server cluster) are capable of approximately 50,000 DFS folders per namespace. 
Administrators would therefore create thousands of folders in a standalone namespace and then <A href="#" target="_blank"> interlink </A> (cascade) it with a domain-based namespace. This allowed for a single, easily identifiable entry point of the domain-based namespace and leveraged the capacity of the standalone namespaces. </P> <BR /> <P> " <A href="#" target="_blank"> Windows Server 2008 mode </A> " namespaces allow for domain-based namespaces of many thousands of DFS folders per namespace (look <A href="#" target="_blank"> here </A> for scalability test results). With many Active Directory deployments currently capable of supporting 2008 mode namespaces, Administrators are wishing to remove their dependency on the standalone namespaces and roll them up into a single domain-based namespace. Doing so will improve referral performance, improve fault-tolerance of the namespace, and ease administration. </P> <BR /> <H2> How to consolidate the namespaces </H2> <BR /> <P> Below are the steps required to consolidate one or more standalone namespaces into an existing domain-based namespace. The foremost goal of this process is to maintain identical UNC paths after the consolidation so that no configuration changes are needed for clients, scripts, or anything else that references the current interlinked namespace paths. Because so many design variations exist, you may only require a subset of the operations or you may have to repeat some procedures multiple times. If you are not concerned with maintaining identical UNC paths, then this blog does not really apply to you. 
</P> <BR /> <P> For demonstration purposes, I will perform the consolidation steps on a namespace with the following configuration: </P> <BR /> <P> Domain-based Namespace: <A> \\\data </A> <BR /> DFS folder: "reporting" (targeting the standalone namespace "reporting" below) <BR /> Standalone Namespace: <A> \\server1\reporting </A> <BR /> DFS folders: "report####" (totaling 10,000 folders) </P> <BR /> <P> Below is what these namespaces look like in the DFS Management MMC. </P> <BR /> <P> Domain Namespace DATA: <BR /> <IMG src="" /> </P> <BR /> <P> Standalone Namespace "Reporting" hosted by server "FS1" and has 15,000 DFS folders: <BR /> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> For a client to access a file in the "report8000" folder in the current DFS design, the client must access the following path: <BR /> <A> \\\data\reporting\report8000 </A> <BR /> <BR /> <IMG src="" /> </P> <BR /> <P> <BR /> Below are the individual elements of that UNC path with descriptions below each: </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> \\ </P> <BR /> </TD> <TD> <BR /> <P> \Data </P> <BR /> </TD> <TD> <BR /> <P> \Reporting </P> <BR /> </TD> <TD> <BR /> <P> \Reporting8000 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Domain </P> <BR /> </TD> <TD> <BR /> <P> Domain-based Namespace </P> <BR /> </TD> <TD> <BR /> <P> Domain-Based Namespace folder </P> <BR /> </TD> <TD> </TD> </TR> <TR> <TD> </TD> <TD> </TD> <TD> <BR /> <P> Standalone Namespace </P> <BR /> </TD> <TD> <BR /> <P> Standalone Namespace folder targeting a file server share </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> <BR /> Note the overlap of the domain-based namespace folder "reporting" (dark green) with the standalone namespace "reporting" (light green). Each item in the UNC path is separated by a "\" and is known as a "path component". 
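Since the rest of the walkthrough hinges on these path components, the same breakdown can be done mechanically by splitting on the "\" separator. A trivial sketch (the TAILSPINTOYS.COM domain name is the one used later in this example):

```python
# Splitting the example UNC path on "\" yields its path components.
path = r"\\TAILSPINTOYS.COM\data\reporting\report8000"
components = path.lstrip("\\").split("\\")
print(components)
# ['TAILSPINTOYS.COM', 'data', 'reporting', 'report8000']
```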
</P> <BR /> <P> In order to preserve the UNC path using a single domain-based namespace we must leverage the ability for DFSN to host multiple path components within a single DFS folder. Currently, the "reporting" DFS folder of the domain-based namespace refers clients to the standalone namespace that contains DFS folders, such as "reporting8000", beneath it. To consolidate those folders of the standalone root to the domain-based namespace, we must merge them together. </P> <BR /> <P> To illustrate this, below is how the new consolidated "Data" domain-based namespace will be structured for this path: </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <A> \\ </A> </P> <BR /> </TD> <TD> <BR /> <P> \Data </P> <BR /> </TD> <TD> <BR /> <P> \Reporting\Reporting8000 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Domain </P> <BR /> </TD> <TD> <BR /> <P> Domain-based Namespace </P> <BR /> </TD> <TD> <BR /> <P> Domain-based Namespace folder targeting a file server share </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> <BR /> Notice how the name of the DFS folder is "Reporting\Reporting8000" and includes two path components separated by a "\". This capability of DFSN is what allows for the creation of any desired path. When users access the UNC path, they ultimately will still be referred to the target file server(s) containing the shared data. "Reporting" is simply a placeholder serving to maintain that original path component. </P> <BR /> <H2> Step-by-step </H2> <BR /> <P> Below are the steps and precautions for consolidating interlinked namespaces. It is highly recommended to put a temporary suspension on any administrative changes to the standalone namespace(s). 
</P> <BR /> <P> <STRONG> Assumptions: </STRONG> <BR /> The instructions assume that you have already met the <A href="#" target="_blank"> requirements </A> for "Windows Server 2008 mode" namespaces and your domain-based namespace is currently running in "Windows 2000 Server mode". </P> <BR /> <P> However, if you have not met these requirements and have a "Windows 2000 Server mode" domain-based namespace, these instructions (with modifications) may still be applied <STRONG> *if* </STRONG> after consolidation the domain-based namespace configuration data is less than 5 MB in size. If you are unsure of the size, you may run the "dfsutil /root:\\&lt;servername&gt;\&lt;namespace_name&gt; /view" command against the standalone namespace and note the size listed at the top (or bottom) of the output. The reported size will be added to the current size of the domain-based namespace and must not exceed 5 MB. Cease any further actions if you are unsure, or test the operations in a lab environment. Of course, if your standalone namespace size was less than 5 MB in size, then why did you create an interlinked namespace to begin with? Eh…I'm not really supposed to ask these questions. Moving on… </P> <BR /> <H3> Step 1 </H3> <BR /> <P> Export the standalone namespace. </P> <BR /> <P> Dfsutil root export <A> \\fs1\reporting </A> c:\exports\reporting_namespace.txt </P> <BR /> <H3> Step 2 </H3> <BR /> <P> Modify the standalone namespace export file using a text editor capable of search-and-replace operations. Notepad.exe has this capability. This export file will be leveraged later to create the proper folders within the domain-based namespace. <BR /> <BR /> Replace the "Name" element of the standalone namespace with the name of the domain-based namespace and replace the "Target" element to be the UNC path of the domain-based namespace server (the one you will be configuring later in step 6).
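These Step 2 edits can also be scripted rather than done by hand in Notepad. A sketch using the server and namespace names from this example; the stand-in export text below is simplified, so verify the replacements against your actual dfsutil export before using anything like this:

```python
def rewrite_export(text):
    # The first occurrence of the old root path is the Name element and
    # the second is the Target element (per the steps above), so replace
    # them one at a time, then prepend the "Reporting\" placeholder path
    # component to every link name.
    text = text.replace(r"\\FS1\reporting", r"\\TAILSPINTOYS.COM\DATA", 1)
    text = text.replace(r"\\FS1\reporting", r"\\DC1\DATA", 1)
    return text.replace('<Link Name="', '<Link Name="Reporting\\')

# A simplified stand-in for the export file contents.
sample = (
    '<Root Name="\\\\FS1\\reporting" Target="\\\\FS1\\reporting">\n'
    '<Link Name="report8000" />\n'
    '</Root>'
)
print(rewrite_export(sample))
```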
Below, I highlighted the single " <A> \\FS1\reporting </A> " 'name' element that will be replaced with " <A> \\TAILSPINTOYS.COM\DATA </A> ". The single " <A> \\FS1\reporting </A> " element immediately below it will be replaced with " <A> \\DC1\DATA </A> " as "DC1" is my namespace server. <BR /> <IMG src="" /> <BR /> <BR /> Next, prepend "Reporting\" to the folder names listed in the export. The final result will be as follows: <BR /> <IMG src="" /> </P> <BR /> <P> One trick is to utilize the 'replace' capability of Notepad.exe to search out and replace all instances of the '&lt;Link Name="' string with '&lt;Link Name="folder\' ('&lt;Link Name="Reporting\' in this example). The picture below shows the original folders defined and the 'replace' dialog responsible for changing the names of the folders (click 'Replace all' to replace all occurrences). <BR /> <IMG src="" /> <BR /> <BR /> Save the modified file with a new filename (reporting_namespace_modified.txt) so as to not overwrite the standalone namespace export file. </P> <BR /> <H3> Step 3 </H3> <BR /> <P> Export the domain-based namespace <BR /> dfsutil root export <A> \\\data </A> c:\exports\data_namespace.txt </P> <BR /> <H3> Step 4 </H3> <BR /> <P> Open the output file from Step 3 and delete the link that is being consolidated ("Reporting"): <BR /> <IMG src="" /> </P> <BR /> <P> Save the file as a separate file (data_namespace_modified.txt). This export will be utilized to recreate the <STRONG> *other* </STRONG> DFS folders within the "Windows Server 2008 Mode" domain-based namespace that do not require consolidation. </P> <BR /> <H3> Step 5 </H3> <BR /> <P> This critical step involves deleting the existing domain-based namespace. This is required for the conversion from "Windows 2000 Server Mode" to "Windows Server 2008 Mode". </P> <BR /> <P> Delete the domain-based namespace ("DATA" in this example). 
<BR /> <IMG src="" /> </P> <BR /> <H3> Step 6 </H3> <BR /> <P> Recreate the "DATA" namespace, specifying the mode as "Windows Server 2008 mode". Specify the namespace server to be a namespace server with close network proximity to the domain's PDC. This will significantly decrease the time it takes to import the DFS folders. Additional namespace servers may be added any time after Step 8. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H3> Step 7 </H3> <BR /> <P> Import the modified export file created in Step 4: <BR /> dfsutil root import merge data_namespace_modified.txt \\\data <BR /> <BR /> In this example, this creates the "Business" and "Finance" DFS folders: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H3> Step 8 </H3> <BR /> <P> Import the modified namespace definition file created in Step 2 to create the required folders (note that this operation may take some time depending on network latencies and other factors): <BR /> dfsutil root import merge reporting_namespace_modified.txt <A href="#" target="_blank"> \\\DATA </A> </P> <BR /> <P> <BR /> <IMG src="" /> </P> <BR /> <H3> Step 9 </H3> <BR /> <P> Verify the structure of the namespace: <BR /> <IMG src="" /> </P> <BR /> <H3> Step 10 </H3> <BR /> <P> Test the functionality of the namespace. From a client or another server, run the "dfsutil /pktflush" command to purge cached referral data and attempt access to the DFS namespace paths. Alternately, you may reboot clients and attempt access if they do not have dfsutil.exe available. <BR /> <BR /> Below is the result of accessing the "report8000" folder path via the new namespace: <BR /> <IMG src="" /> <BR /> <BR /> Referral cache confirms the new namespace structure (red line highlighting the name of the DFS folder as "reporting\report8000"): <BR /> <IMG src="" /> </P> <BR /> <P> <BR /> At this point, you should have a fully working namespace. 
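One automation note before moving on: if you have many DFS folders, the Step 2 search-and-replace edits lend themselves to scripting. Below is a minimal, illustrative sketch in Python (not an official tool); the names used — \\FS1\reporting, \\TAILSPINTOYS.COM\DATA, \\DC1\DATA, and the "Reporting\" prefix — are this post's example values, so substitute your own and compare the scripted output against a manual edit before importing anything.

```python
# Illustrative only: a scripted version of the Step 2 edits to a
# "dfsutil root export" file. All server/namespace names below are the
# example values from this post, not real infrastructure.
def consolidate_export(text: str) -> str:
    # The first \\FS1\reporting occurrence is the root "Name" element:
    # point it at the domain-based namespace path.
    text = text.replace(r"\\FS1\reporting", r"\\TAILSPINTOYS.COM\DATA", 1)
    # The next occurrence is the root "Target" element: point it at the
    # namespace server share (DC1 here, the server configured in Step 6).
    text = text.replace(r"\\FS1\reporting", r"\\DC1\DATA", 1)
    # Prepend the placeholder path component to every DFS folder name,
    # mirroring the Notepad 'Replace all' trick described in Step 2.
    return text.replace('<Link Name="', '<Link Name="Reporting\\')
```

Run it over the contents of reporting_namespace.txt and save the result to a new file (reporting_namespace_modified.txt), just as in the manual steps, so the original export stays untouched for rollback.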
If something is not working quite right or there are problems accessing the data, you may return to the original namespace design by deleting all DFS folders in the new domain-based namespace and importing the original namespace from the export file (or recreating the original folders by hand). At no time did we alter the standalone namespaces, so returning to the original interlinked configuration is very easy to accomplish. </P> <BR /> <H3> Step 11 </H3> <BR /> <P> Add the necessary namespace servers to the domain-based namespace to increase fault tolerance. </P> <BR /> <P> Notify all previous administrators of the standalone namespace(s) that they will need to manage the domain-based namespace from this point forward. Once you are confident with the new namespace, the original standalone namespace(s) may be retired at any time (assuming no systems on the network are using UNC paths directly to the standalone namespace). </P> <BR /> <H3> Namespace already in "Windows Server 2008 mode"? </H3> <BR /> <P> What would the process be if the domain-based namespace is already running in "Windows Server 2008 mode"? Or, what if you have already run through the operations once and wish to consolidate additional DFS folders? Some steps remain the same while others are skipped entirely: <BR /> <STRONG> Steps 1-2 </STRONG> (same as detailed previously to export the standalone namespace and modify the export file) <BR /> <STRONG> Step 3 </STRONG> Export the domain-based namespace for backup purposes <BR /> <STRONG> Step 4 </STRONG> Delete the DFS folder targeting the standalone namespace--the remainder of the domain-based namespace will remain unchanged <BR /> <STRONG> Step 8 </STRONG> Import the modified file created in step 2 to the domain-based namespace <BR /> <STRONG> Step 9-10 </STRONG> Verify the structure and function of the namespace </P> <BR /> <H2> Caveats and Concerns </H2> <BR /> <P> Ensure that no data exists in the original standalone namespace server's namespace share.
Because clients are no longer using the standalone namespace, the "reporting" path component exists as a subfolder within each domain-based namespace server's share. Furthermore, hosting data within the namespace share (domain-based or standalone) is not recommended. If this applies to you, consider moving such data into a separate folder within the new namespace and update any references to those files used by clients. </P> <BR /> <P> These operations should be performed during a maintenance window, the length of which is dictated by your efficiency in performing the operations and the length of time it takes to import the DFS namespace export file. Because a namespace is so easily built, modified, and deleted, you may wish to consider a "dry run" of sorts. Prior to deleting your production namespace(s), create a new test namespace (e.g. "DataTEST"), modify your standalone namespace export file (Step 2) to reference this "DataTEST" namespace and try the import. Because you are using a separate namespace, no changes will occur to any other production namespaces. You may gauge the time required for the import, and more importantly, test access to the data ( <A> \\\DataTEST\Reporting\Reporting8000 </A> in my example). If access to the data is successful, then you will have confidence in replacing the real domain-based namespace. </P> <BR /> <P> Clients should not be negatively affected by the restructuring as they will discover the new hierarchy automatically. By default, clients cache namespace referrals for 5 minutes and folder referrals for 30 minutes. It is advisable to keep the standalone namespace(s) operational for at least an hour or so to accommodate transition to the new namespace, but it may remain in place for as long as you wish.
<BR /> <BR /> If you decommission the standalone namespace and find some clients are still using it directly, you could easily recreate the standalone namespace from our export in Step 1 while you investigate the client configurations and remove their dependency on it. </P> <BR /> <P> Lastly, if you are taking the time and effort to recreate the namespace for "Windows Server 2008 mode" support, you might as well consider configuring the targets of the DFS folders with DNS names (modify the export files) and also implementing <A href="#" target="_blank"> DFSDnsConfig </A> on the namespace servers. </P> <BR /> <P> I hope this blog eliminates some of the concerns and fears of consolidating interlinked namespaces! </P> <BR /> <P> Dave "King" Fisher </P> </BODY></HTML> Fri, 05 Apr 2019 02:49:25 GMT Ryan Ries 2019-04-05T02:49:25Z Configuring Change Notification on a MANUALLY created Replication partner <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jan 21, 2013 </STRONG> <BR /> <P> Hello. <A href="#" target="_blank"> Jim </A> here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know, Active Directory replication between domain controllers within the same site (intrasite) happens almost immediately, driven by change notification. Active Directory replication between sites (intersite) occurs every 180 minutes (3 hours) by default. You can adjust this frequency to match your specific needs, BUT it can be no faster than fifteen minutes when configured via the AD Sites and Services snap-in. </P> <BR /> <P> Back in the old days when remote sites were connected by a string and two soup cans, it was necessary in most cases to carefully consider configuring your replication intervals and times so as not to flood the pipe (or string in the reference above) with replication traffic and bring your WAN to a grinding halt. With dial-up connections between sites it was even more important.
It remains an important consideration today if your site is a ship at sea and your only connectivity is a satellite link that could be obscured by a cloud of space debris. </P> <BR /> <P> Now in the days of wicked fast fiber links and MPLS VPN Connectivity, change notification may be enabled between site links that can span geographic locations. This will make Active Directory replication instantaneous between the separate sites as if the replication partners were in the same site. Although this is well documented on TechNet and I hate regurgitating existing content, here is how you would configure change notification on a site link: </P> <BR /> <OL> <BR /> <LI> Open ADSIEdit.msc. </LI> <BR /> <LI> In ADSI Edit, expand the Configuration container. </LI> <BR /> <LI> Expand Sites, navigate to the Inter-Site Transports container, and select CN=IP. <BR /> <BR /> Note: You cannot enable change notification for SMTP links. </LI> <BR /> <LI> Right-click the site link object for the sites where you want to enable change notification, e.g. CN=DEFAULTSITELINK, click Properties. </LI> <BR /> <LI> In the Attribute Editor tab, double click on Options. </LI> <BR /> <LI> If the Value(s) box shows &lt;not set&gt;, type 1. </LI> <BR /> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> There is one caveat however. Change notification will fail with manual connection objects. If your connection objects are not created by the KCC the change notification setting is meaningless. If it's a manual connection object, it will NOT inherit the Options bit from the Site Link. Enjoy your 15 minute replication latency. </P> <BR /> <P> Why would you want to keep connection objects you created manually, anyway? Why don't you just let the KCC do its thing and be happy? Maybe you have a Site Link costing configuration that you would rather not change. Perhaps you are at the mercy of your networking team and the routing of your network and you must keep these manual connections. 
If, for whatever reason you must keep the manually created replication partners, be of good cheer. You can still enjoy the thrill of change notification. </P> <BR /> <P> Change Notification on a manually created replication partner is configured by doing the following: </P> <BR /> <OL> <BR /> <LI> Open ADSIEDIT.msc. </LI> <BR /> <LI> In ADSI Edit, expand the Configuration container. </LI> <BR /> <LI> Navigate to the following location: <BR /> <BR /> \Sites\SiteName\Server\NTDS settings\connection object that was manually created </LI> <BR /> <LI> Right-click on the manually created connection object name. </LI> <BR /> <LI> In the Attribute Editor tab, double click on Options. </LI> <BR /> <LI> If the value is 0 then set it to 8. </LI> <BR /> </OL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If the value is anything other than zero, you must do some binary math. Relax; this is going to be fun. </P> <BR /> <P> On the Site Link object, it's the 1st bit that controls change notification. On the Connection object, however, it's the 4th bit. The 4th bit is highlighted in <STRONG> RED </STRONG> below represented in binary (You remember binary don't you???) 
</P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> Binary Bit </P> <BR /> </TD> <TD> <BR /> <P> 8th </P> <BR /> </TD> <TD> <BR /> <P> 7th </P> <BR /> </TD> <TD> <BR /> <P> 6th </P> <BR /> </TD> <TD> <BR /> <P> 5th </P> <BR /> </TD> <TD> <BR /> <P> 4th </P> <BR /> </TD> <TD> <BR /> <P> 3rd </P> <BR /> </TD> <TD> <BR /> <P> 2nd </P> <BR /> </TD> <TD> <BR /> <P> 1st </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> Decimal Value </P> <BR /> </TD> <TD> <BR /> <P> 128 </P> <BR /> </TD> <TD> <BR /> <P> 64 </P> <BR /> </TD> <TD> <BR /> <P> 32 </P> <BR /> </TD> <TD> <BR /> <P> 16 </P> <BR /> </TD> <TD> <BR /> <P> 8 </P> <BR /> </TD> <TD> <BR /> <P> 4 </P> <BR /> </TD> <TD> <BR /> <P> 2 </P> <BR /> </TD> <TD> <BR /> <P> 1 </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <P> NOTE: The values represented by each bit in the Options attribute are documented in the <A href="#" target="_blank"> Active Directory Technical Specification </A> . Fair warning! I'm only including that information for the curious. I STRONGLY recommend against setting any of the options NOT discussed specifically in existing documentation or blogs in your production environment. </P> <BR /> <P> Remember what I said earlier? If it's a manual connection object, it will NOT inherit the Options value from the Site Link object. You're going to have to enable change notifications directly on the manually created connection object. </P> <BR /> <P> Take the value of the Options attribute, let's say it is 16. </P> <BR /> <P> Open Calc.exe in Programmer mode, and paste the contents of your options attribute. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Click on Bin, and count over to the 4th bit starting from the right. 
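If calculator gymnastics aren't your thing, the same test-and-set is a one-line bitwise operation. Here's an illustrative sketch (Python is used purely to show the math; the values come straight from this post — decimal 1 for the 1st bit on a site link, decimal 8 for the 4th bit on a connection object):

```python
# Bit values from this post: the 1st bit (decimal 1) enables change
# notification on a site link object; the 4th bit (decimal 8) enables it
# on a manually created connection object.
SITE_LINK_NOTIFY = 0x1
CONNECTION_NOTIFY = 0x8

def notification_enabled(options: int) -> bool:
    # True if the connection object's 4th bit is already set.
    return bool(options & CONNECTION_NOTIFY)

def enable_notification(options: int) -> int:
    # OR in the bit; any other Options bits are left untouched.
    return options | CONNECTION_NOTIFY
```

With the article's example value of 16, enable_notification(16) yields 24 — the same answer the Calc.exe walkthrough below arrives at.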
</P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> That's the bit that controls change notification on your manually created replication partner. As you can see, in this example it is zero (0), so change notifications are disabled. </P> <BR /> <P> Convert back to decimal and add 8 to it. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Click on Bin, again. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> As you can see above, the bit that controls change notification on the manually created replication partner is now 1. You would then change the Options value in ADSIEDIT from 16 to 24. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Click on Ok to commit the change. </P> <BR /> <P> Congratulations! You have now configured change notification on your manually created connection object. This sequence of events must be repeated for each manually created connection object that you want to include in the excitement and instantaneous gratification of change notification. Keep in mind that in the event something (or many things) gets deleted from a domain controller, you no longer have that window of intersite latency to stop inbound replication on a downstream partner and do an authoritative restore. Plan the configuration of change notifications accordingly. Make sure you take regular backups, and test them occasionally! </P> <BR /> <P> And when you speak of me, speak well… </P> <BR /> <P> Jim " <A href="#" target="_blank"> changes aren't permanent, but change is </A> " Tierney </P> <BR /> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 02:47:12 GMT Ryan Ries 2019-04-05T02:47:12Z ADAMSync + (AD Recycle Bin OR searchFlags) = "FUN" <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jan 09, 2013 </STRONG> <BR /> <P> Hello again ADAMSyncers! <A href="#" target="_blank"> Kim Nichols </A> here again with what promises to be a fun and exciting mystery solving adventure on the joys of ADAMSync and AD Recycle Bin (ADRB) for AD LDS. 
The goal of this post is two-fold: </P> <BR /> <OL> <BR /> <LI> Explain AD Recycle Bin for AD LDS and how to enable it </LI> <BR /> <LI> Highlight an issue that you may experience if you enable AD Recycle Bin for AD LDS and use ADAMSync </LI> <BR /> </OL> <BR /> <P> I'll start with some background on AD Recycle Bin for AD LDS and then go through a recent mind-boggling scenario from beginning to end to explain why you may not want (or need) to enable AD Recycle Bin if you are planning on using ADAMSync. </P> <BR /> <P> Hold on to your hats! </P> <BR /> <H2> AD Recycle Bin for ADLDS </H2> <BR /> <P> If you're not familiar with AD Recycle Bin and what it can do for you, check out <A href="#" target="_blank"> Ned's </A> prior blog posts or the content available on TechNet. </P> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Active Directory Recycle Bin in Windows Server 2008 R2 </A> </LI> <BR /> <LI> <A href="#" target="_blank"> The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Active Directory Recycle Bin Step-by-Step Guide </A> </LI> <BR /> <LI> <BR /> <DIV> <A href="#" target="_blank"> Advanced AD DS Management Using Active Directory Administrative Center (Level 200) </A> </DIV> <BR /> <UL> <BR /> <LI> <BR /> <DIV> Lots of new features in Windows Server 2012 AD Administrative Center in regard to AD Recycle Bin </DIV> <BR /> </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> The short version is that AD Recycle Bin is a feature added in Windows Server 2008 R2 that allows Administrators to recover deleted objects without restoring System State backups and performing <A href="#" target="_blank"> authoritative restores </A> of those objects. 
</P> <BR /> <H2> Requirements for AD Recycle Bin </H2> <BR /> <P> </P> <BR /> <P> To enable AD Recycle Bin (ADRB) for AD DS your forest needs to meet some basic requirements: </P> <BR /> <P> </P> <BR /> <OL> <BR /> <LI> <A href="#" target="_blank"> Have extended your schema to Windows Server 2008 R2. </A> </LI> <BR /> <LI> Have only Windows Server 2008 R2 DCs in your forest. </LI> <BR /> <LI> Raise your domain(s) functional level to Windows Server 2008 R2. </LI> <BR /> <LI> Raise your forest's functional level to Windows Server 2008 R2. </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> What you may not be aware of is that AD LDS has this feature as well. The <A href="#" target="_blank"> requirements </A> for implementing ADRB in AD LDS are the same as AD DS although they are not as intuitive for AD LDS instances. </P> <BR /> <P> </P> <BR /> <H2> Schema must be Windows Server 2008 R2 </H2> <BR /> <P> </P> <BR /> <P> If your AD LDS instance was originally built as an ADAM instance, then you may or may not have extended the schema of your instance to Windows Server 2008 R2. If not, upgrading the schema is a necessary first step in order to support ADRB functionality. </P> <BR /> <P> </P> <BR /> <P> To update your AD LDS schema to Windows Server 2008 R2, run the following command from your ADAM installation directory on your AD LDS server: </P> <BR /> <P> </P> <BR /> <P> Ldifde.exe -i -f MS-ADAM-Upgrade-2.ldf -s server:port -b username domain password -j . -$ </P> <BR /> <P> </P> <BR /> <P> You'll also want to update your configuration partition: </P> <BR /> <P> </P> <BR /> <P> ldifde -i -f ms-ADAM-Upgrade-1.ldf -s server:portnumber -b username domain password -k -j .
-c "CN=Configuration,DC=X" #configurationNamingContext </P> <BR /> <P> </P> <BR /> <P> Information on these commands can be found on TechNet: </P> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Appendix B: Upgrading from ADAM to AD LDS </A> </LI> <BR /> <LI> <BR /> <DIV> <A href="#" target="_blank"> Requirements for Active Directory Recycle Bin </A> </DIV> <BR /> <P> </P> <BR /> </LI> <BR /> </UL> <BR /> <H2> Decommission any Windows Server 2003 ADAM servers in the Replica set </H2> <BR /> <P> </P> <BR /> <P> In an AD DS environment, ADRB requires that all domain controllers in the forest be running Windows Server 2008 R2. Translating this to an AD LDS scenario, all servers in your replica set must be running Windows Server 2008 R2. So, if you've been hanging on to those Windows Server 2003 ADAM servers for some reason, now is the time to decommission them. </P> <BR /> <P> </P> <BR /> <P> <A href="#" target="_blank"> LaNae </A> 's blog " <A href="#" target="_blank"> How to Decommission an ADAM/ADLDS server and Add Additional Servers </A> " explains the process for removing a replica member. The process is pretty straightforward and just involves uninstalling the instance, but you will want to check FSMO role ownership, overall instance health, and application configurations before blindly uninstalling. Now is not the time to discover applications have been hard-coded to point to your Windows Server 2003 server or that you've unknowingly been having replication issues. </P> <BR /> <P> </P> <BR /> <H2> Raise the functional level of the instance </H2> <BR /> <P> </P> <BR /> <P> In AD DS, raising the domain and forest functional levels is easy; there's a UI -- AD Domains and Trusts. AD LDS doesn't have this snap-in, though, so it is a little more complicated.
There's a good KB article ( <A href="#" target="_blank"> 322692 </A> ) that details the process of raising the functional levels of AD and gives us insight into what we need to do to raise our AD LDS functional level since we can't use the AD Domains and Trusts MMC. </P> <BR /> <P> </P> <BR /> <P> AD LDS only has the concept of forest functional levels. There is no domain functional level in AD LDS. The forest functional level is controlled by the <STRONG> msDS-Behavior-Version </STRONG> attribute on the CN=Partitions object in the Configuration naming context of your AD LDS instance. </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Simply changing the value of <STRONG> msDS-Behavior-Version </STRONG> from 2 to 4 will update the functional level of your instance from Windows Server 2003 to Windows Server 2008 R2. Alternatively, you can use Windows PowerShell to upgrade the functional level of your AD LDS instance. For AD DS, there is a dedicated Windows PowerShell cmdlet for raising the forest functional level called Set-ADForestMode, but this cmdlet is <A href="#" target="_blank"> not supported </A> for AD LDS. To use Windows PowerShell to raise the functional level for AD LDS, you will need to use the Set-ADObject cmdlet to specify the new value for the <STRONG> msDS-Behavior-Version </STRONG> attribute.
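As a quick reference, here's a sketch of how the common msDS-Behavior-Version values line up with functional levels (drawn from the Active Directory technical documentation; treat it as a cheat sheet, not an exhaustive list). The change described in this post moves the value from 2 to 4:

```python
# Common msDS-Behavior-Version values and the functional level each
# represents; this post's upgrade changes the attribute from 2 to 4.
BEHAVIOR_VERSION = {
    0: "Windows 2000",
    2: "Windows Server 2003",
    3: "Windows Server 2008",
    4: "Windows Server 2008 R2",
}
```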
</P> <BR /> <P> </P> <BR /> <P> To raise the AD LDS functional level using Windows PowerShell, run the following command (after loading the AD module): </P> <BR /> <P> </P> <BR /> <P> Set-ADObject -Identity &lt;path to Partitions container in Configuration Partition of instance&gt; -Replace @{'msds-Behavior-Version'=4} -Server &lt;server:port&gt; </P> <BR /> <P> </P> <BR /> <P> For example in my environment, I ran: </P> <BR /> <P> </P> <BR /> <P> Set-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' -Replace @{'msDS-Behavior-Version'=4} -Server 'localhost:50000' </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> As always, before making changes to your production environment: </P> <BR /> <OL> <BR /> <LI> Test in a TEST or DEV environment </LI> <BR /> <LI> Have good back-ups </LI> <BR /> <LI> Verify the general health of the environment (check replication, server health, etc) </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> Now we're ready to enable AD Recycle Bin! </P> <BR /> <H2> Enabling AD Recycle Bin for AD LDS </H2> <BR /> <P> </P> <BR /> <P> For Windows Server 2008 R2, the process for enabling <A href="#" target="_blank"> ADRB in AD LDS </A> is nearly identical to that for AD DS. Either Windows PowerShell or LDP can be used to enable the feature. Also, there is no UI for enabling ADRB for AD LDS in Windows Server 2008 R2 or Windows Server 2012. Windows Server 2012 does add the ability to enable ADRB and restore objects through the AD Administrative Center for AD DS (you can read about it <A href="#" target="_blank"> here </A> ), but this UI does not work for AD LDS instances on Windows Server 2012. </P> <BR /> <P> </P> <BR /> <P> <STRONG> Once the feature is enabled, it cannot be disabled. So, before you continue, be certain you really want to do this. (Read this whole post to help you decide.) 
</STRONG> </P> <BR /> <P> </P> <BR /> <P> The ADRB can be enabled in both AD DS and AD LDS using a PowerShell cmdlet, but the syntax is slightly different between the two. The difference is fully documented in <A href="#" target="_blank"> TechNet </A> . </P> <BR /> <P> </P> <BR /> <P> In my lab, I used the PowerShell cmdlet to enable the feature rather than using LDP. Below is the syntax for AD LDS: </P> <BR /> <P> </P> <BR /> <P> Enable-ADOptionalFeature 'recycle bin feature' -Scope ForestOrConfigurationSet -Server &lt;server:port&gt; -Target &lt;DN of configuration partition&gt; </P> <BR /> <P> </P> <BR /> <P> Here's the actual cmdlet I used and a screenshot of the output. The cmdlet asks you to confirm that you want to enable the feature since this is an <STRONG> irreversible process </STRONG> . </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> You can verify that the command worked by checking the <STRONG> msDS-EnabledFeature </STRONG> attribute on the Partitions container of the Configuration NC of your instance. </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <H2> Seemed like a good idea at the time. . . </H2> <BR /> <P> </P> <BR /> <P> Now, on to what prompted this post in the first place. </P> <BR /> <P> </P> <BR /> <P> Once ADRB is enabled, there is a change to how deleted objects are handled when they are removed from the directory. Prior to enabling ADRB, when an object is deleted it is moved to the Deleted Objects container within the application partition of your instance (CN=Deleted Objects, DC=instance1, DC=local or whatever the name of your instance is) and most of the attributes are deleted.
Without Recycle Bin enabled, a user object in the Deleted Object container looks like this in LDP: </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> After enabling ADRB, a deleted user object looks like this in LDP: </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Notice that after enabling ADRB, <STRONG> givenName </STRONG> , <STRONG> displayName </STRONG> , and several other attributes including <STRONG> userPrincipalName </STRONG> (UPN) are maintained on the object while in the Deleted Objects container. This is great if you ever need to restore this user: most of the data is retained and it's a pretty simple process <A href="#" target="_blank"> using LDP </A> or <A href="#" target="_blank"> PowerShell </A> to reanimate the object without the need to go through the authoritative restore process. But, retaining the UPN attribute specifically can cause issues if ADAMSync is being used to synchronize objects from AD DS to AD LDS since the <STRONG> userPrincipalName </STRONG> attribute must be unique within an AD LDS instance. </P> <BR /> <P> </P> <BR /> <P> In general, the recommendation when using ADAMSync, is to perform all user management (additions/deletions) on the AD DS side of the sync and let the synchronization process handle the edits in AD LDS. There are times, though, when you may need to remove users in AD LDS in order to resolve synchronization issues and this is where having ADRB enabled will cause problems. </P> <BR /> <P> </P> <BR /> <P> For example: </P> <BR /> <P> </P> <BR /> <P> Let's say that you discover that you have two users with the same <STRONG> userPrincipalName </STRONG> in AD and this is causing issues with ADAMSync: the infamous ATT_OR_VALUE_EXISTS error in the ADAMSync log. 
</P> <BR /> <P> </P> <BR /> <P> ==================================================== </P> <BR /> <P> Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0 Processing source entry &lt;guid=fe36238b9dd27a45b96304ea820c82d8&gt; Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8. </P> <BR /> <P> </P> <BR /> <P> Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1: </P> <BR /> <P> 0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName) </P> <BR /> <P> </P> <BR /> <P> . Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1: </P> <BR /> <P> 0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName) </P> <BR /> <P> =============================================== </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> Upon further inspection of the users, you determine that at some point a copy was made of the user's account in AD and the UPN was not updated. The old account is not needed anymore but was never cleaned up either. To get your ADAMSync working, you: </P> <BR /> <OL> <BR /> <LI> Delete the user account that synced to AD LDS. </LI> <BR /> <LI> Delete the extra account in AD (or update the UPN on one of the accounts). </LI> <BR /> <LI> Try to sync again </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> BWAMP! </P> <BR /> <P> </P> <BR /> <P> The sync still fails with the ATT_OR_VALUE_EXISTS error on the same user. This doesn't make sense, right? You deleted the extra user in AD and cleaned up AD LDS by deleting the user account there. There should be no duplicates. The ATT_OR_VALUE_EXISTS error is not an ADAMSync error. 
ADAMSync is making LDAP calls to the AD LDS instance to create or modify objects. This error is an LDAP error from the AD LDS instance and is telling you that you already have an object in the directory with that same <STRONG> userPrincipalName </STRONG> . For what it's worth, I've never seen this error logged if the duplicate isn't there. It is there; you just have to find it! </P> <BR /> <P> </P> <BR /> <P> At this point, it's not hard to guess where the duplicate is coming from, since we've already discussed ADRB and the attributes maintained on deletion. The duplicate <STRONG> userPrincipalName </STRONG> is coming from the object we deleted from the AD LDS instance and is located in the Deleted Objects container. The good news is that LDP allows you to browse the container to find the deleted object. If you've never used LDP before to look through the Deleted Objects container, TechNet provides information on how to <A href="#" target="_blank"> browse for deleted objects via LDP </A> . </P> <BR /> <P> </P> <BR /> <P> It's great that we know why we are having the problem, but how do we fix it? Now that we're already in this situation, the only way to fix it is to eliminate the duplicate UPN from the object in CN=Deleted Objects. To do this: </P> <BR /> <P> </P> <BR /> <OL> <BR /> <LI> <A href="#" target="_blank"> Restore the deleted object </A> in AD LDS using LDP or PowerShell </LI> <BR /> <LI> After the object is restored, modify the UPN to something bogus that will never be used on a real user </LI> <BR /> <LI> Delete the object again </LI> <BR /> <LI> Run ADAMSync again </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> Now your sync should complete successfully! </P> <BR /> <H2> Not so fast, you say . . . </H2> <BR /> <P> </P> <BR /> <P> So, I was feeling pretty good about myself on this case.
I spent hours figuring out ADRB for AD LDS and setting up the repro in my lab and proving that deleting objects with ADRB enabled could cause ATT_OR_VALUE_EXISTS errors during ADAMSync. I was already patting myself on the back and starting my victory lap when I got an email back from my customer stating the <STRONG> msDS-BehaviorVersion </STRONG> attribute on their AD LDS instance was still set to 2. </P> <BR /> <P> </P> <BR /> <P> Huh?! </P> <BR /> <P> </P> <BR /> <P> I'll admit it, I was totally confused. How could this be? I had LDP output from the customer's AD LDS instance and could see that the <STRONG> userPrincipalName </STRONG> attribute was being maintained on objects in the Deleted Objects container. I knew from my lab that this is not normal behavior when ADRB is disabled. So, what the heck is going on? </P> <BR /> <P> </P> <BR /> <P> I know when I'm beat, so decided to use one of my "life lines" . . . I emailed <A href="#" target="_blank"> Linda Taylor </A> . Linda is an Escalation Engineer in the UK Directory Services team and has been working with ADAM and AD LDS much longer than I have. This is where I should include a picture of Linda in a cape because she came to the rescue again! </P> <BR /> <P> </P> <BR /> <P> Apparently, there is more than one way for an attribute to be maintained on deletion. The most obvious was that ADRB had been enabled. The less obvious requires a better understanding of what actually happens when an object is deleted. <A href="#" target="_blank"> Transformation into a Tombstone </A> documents this process in more detail. 
The part that is important to us is: </P> <BR /> <UL> <BR /> <LI> <BR /> <DIV> All attribute values are removed from the object, with the following exceptions: </DIV> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> nTSecurityDescriptor </A> , <A href="#" target="_blank"> attributeID </A> , <A href="#" target="_blank"> attributeSyntax </A> , <A href="#" target="_blank"> dNReferenceUpdate </A> , <A href="#" target="_blank"> dNSHostName </A> , <A href="#" target="_blank"> flatName </A> , <A href="#" target="_blank"> governsID </A> , <A href="#" target="_blank"> groupType </A> , <A href="#" target="_blank"> instanceType </A> , <A href="#" target="_blank"> lDAPDisplayName </A> , <A href="#" target="_blank"> legacyExchangeDN </A> , <A href="#" target="_blank"> mS-DS-CreatorSID </A> , <A href="#" target="_blank"> mSMQOwnerID </A> , <A href="#" target="_blank"> nCName </A> , <A href="#" target="_blank"> objectClass </A> , <A href="#" target="_blank"> distinguishedName </A> , <A href="#" target="_blank"> objectGUID </A> , <A href="#" target="_blank"> objectSid </A> , <A href="#" target="_blank"> oMSyntax </A> , <A href="#" target="_blank"> proxiedObjectName </A> , <A href="#" target="_blank"> name </A> , <A href="#" target="_blank"> replPropertyMetaData </A> , <A href="#" target="_blank"> sAMAccountName </A> , <A href="#" target="_blank"> securityIdentifier </A> , <A href="#" target="_blank"> sIDHistory </A> , <A href="#" target="_blank"> subClassOf </A> , <A href="#" target="_blank"> systemFlags </A> , <A href="#" target="_blank"> trustPartner </A> , <A href="#" target="_blank"> trustDirection </A> , <A href="#" target="_blank"> trustType </A> , <A href="#" target="_blank"> trustAttributes </A> , <A href="#" target="_blank"> userAccountControl </A> , <A href="#" target="_blank"> uSNChanged </A> , <A href="#" target="_blank"> uSNCreated </A> , <A href="#" target="_blank"> whenCreated </A> attribute values are retained. 
</LI> <BR /> <LI> In AD LDS, the <A href="#" target="_blank"> msDS-PortLDAP </A> attribute is also retained. </LI> <BR /> <LI> The attribute that equals the <A href="#" target="_blank"> rdnType </A> of the object (for example, <A href="#" target="_blank"> cn </A> for a <A href="#" target="_blank"> user </A> object) is retained. </LI> <BR /> <LI> Any attribute that has the fPRESERVEONDELETE flag set in its <A href="#" target="_blank"> searchFlags </A> is retained, except <A href="#" target="_blank"> objectCategory </A> and <A href="#" target="_blank"> sAMAccountType </A> , which are always removed, regardless of the value of their <A href="#" target="_blank"> searchFlags </A> . </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> The Schema Management snap-in doesn't allow us to see attributes on attributes, so to verify the value of <STRONG> searchFlags </STRONG> on the <STRONG> userPrincipalName </STRONG> attribute we need to use ADSIEdit or LDP. </P> <BR /> <P> </P> <BR /> <P> <STRONG> WARNING: Modifying the schema can have unintended consequences. Please be certain you really need to do this before proceeding and always test first! </STRONG> </P> <BR /> <P> </P> <BR /> <P> By default, the <STRONG> searchFlags </STRONG> attribute on <STRONG> userPrincipalName </STRONG> should be set to 0x1 (INDEX). </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> <P> My customer's <STRONG> searchFlags </STRONG> attribute was set to 0x1F (31 decimal) = (INDEX | CONTAINER_INDEX | ANR | PRESERVE_ON_DELETE | COPY). </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Apparently, these changes to the schema had been made to improve query efficiency when searching on the <STRONG> userPrincipalName </STRONG> attribute.
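</P> <BR /> <P> To make the flag arithmetic concrete, here is a minimal Python sketch that decodes a <STRONG> searchFlags </STRONG> value into its component flags and shows the effect of clearing the PRESERVE_ON_DELETE bit. It is an illustration only; it does not read from or write to the directory. </P>

```python
# searchFlags is a bit field; these are the six low-order documented bits.
FLAGS = {
    0x01: "INDEX",
    0x02: "CONTAINER_INDEX",
    0x04: "ANR",
    0x08: "PRESERVE_ON_DELETE",
    0x10: "COPY",
    0x20: "TUPLE_INDEX",
}

def decode(value):
    """Return the names of the flags set in a searchFlags value."""
    return [name for bit, name in sorted(FLAGS.items()) if value & bit]

customer = 0x1F                # 31 decimal, as seen on userPrincipalName
print(decode(customer))        # INDEX, CONTAINER_INDEX, ANR, PRESERVE_ON_DELETE, COPY

fixed = customer & ~0x08       # clear only the PRESERVE_ON_DELETE bit
print(fixed, decode(fixed))    # 23 -> INDEX, CONTAINER_INDEX, ANR, COPY
```

<P> Clearing the bit with a bitwise AND against the complement is safer than plain subtraction, because it is a no-op if the bit happens not to be set.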
</P> <BR /> <P> </P> <BR /> <P> <STRONG> Reminder: Manually modifying the schema in this way is not something you should do unless you are certain you know what you are doing or have been directed to by Microsoft Support. </STRONG> </P> <BR /> <P> </P> <BR /> <P> The <STRONG> searchFlags </STRONG> attribute is a bitwise attribute containing a number of different options, which are outlined <A href="#" target="_blank"> here </A> . This attribute can be zero or a combination of one or more of the following values: </P> <BR /> <DIV> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Value </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Description </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 1 (0x00000001) </P> <BR /> </TD> <TD> <BR /> <P> Create an index for the attribute. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 2 (0x00000002) </P> <BR /> </TD> <TD> <BR /> <P> Create an index for the attribute in each container. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 4 (0x00000004) </P> <BR /> </TD> <TD> <BR /> <P> Add this attribute to the Ambiguous Name Resolution (ANR) set. This is used to assist in finding an object when only partial information is given. For example, if the LDAP filter is (ANR=JEFF), the search will find each object where the first name, last name, email address, or other ANR attribute is equal to JEFF. Bit 0 must be set for this index to take effect. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 8 (0x00000008) </P> <BR /> </TD> <TD> <BR /> <P> Preserve this attribute in the tombstone object for deleted objects. </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 16 (0x00000010) </P> <BR /> </TD> <TD> <BR /> <P> Copy the value for this attribute when the object is copied.
</P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 32 (0x00000020) </P> <BR /> </TD> <TD> <BR /> <P> Supported beginning with Windows Server&nbsp;2003. Create a tuple index for the attribute. This will improve searches where the wildcard appears at the front of the search string. For example, (sn=*mith). </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> 64 (0x00000040) </P> <BR /> </TD> <TD> <BR /> <P> Supported beginning with ADAM. Creates an index to greatly help virtual list view (VLV) performance on arbitrary attributes. </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <P> To remove the PRESERVE_ON_DELETE flag, we subtracted 8 from the customer's value of 31, which gave us a value of 23 (INDEX | CONTAINER_INDEX | ANR | COPY). </P> <BR /> <P> </P> <BR /> <P> Once we removed the PRESERVE_ON_DELETE flag, we created and deleted a test account to confirm our modifications changed the tombstone behavior of the <STRONG> userPrincipalName </STRONG> attribute. The UPN was no longer maintained! </P> <BR /> <P> </P> <BR /> <P> Mystery solved!! I think we all deserve a Scooby Snack now! </P> <BR /> <P> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Nom nom nom! </P> <BR /> <P> </P> <BR /> <H2> Lessons learned </H2> <BR /> <P> </P> <BR /> <OL> <BR /> <LI> ADRB is a great feature for AD. It can even be useful for AD LDS if you aren't synchronizing with AD. If you are synchronizing with AD, then the benefits of ADRB are limited, and in the end it can cause you more problems than it solves. </LI> <BR /> <LI> Manually modifying the schema can have unintended consequences. </LI> <BR /> <LI> PowerShell for AD LDS is not as easy to use as it is for AD. </LI> <BR /> <LI> AD Administrative Center is for AD, not AD LDS. </LI> <BR /> <LI> LDP rocks! </LI> <BR /> </OL> <BR /> <P> </P> <BR /> <P> This wraps up the "More than you really ever wanted to know about ADAMSync, ADRB &amp; searchFlags" Scooby Doo edition of AskDS. Now, go enjoy your Scooby Snacks!
</P> <BR /> <P> <BR /> - Kim "That Meddling Kid" Nichols </P> <BR /> <P> </P> <BR /> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 02:46:02 GMT Ryan Ries 2019-04-05T02:46:02Z Intermittent Mail Sack: Must Remember to Write 2013 Edition <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Jan 07, 2013 </STRONG> <BR /> <P> Hi all, <A href="#" target="_blank"> Jonathan </A> here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on: </P> <BR /> <UL> <BR /> <LI> <A href="" target="_blank"> Issues upgrading DFSR hub servers to Windows Server 2012 </A> </LI> <BR /> <LI> <A href="" target="_blank"> AD FS Sign-out behavior </A> </LI> <BR /> <LI> <A href="" target="_blank"> Dynamic Access Control and DFSR </A> </LI> <BR /> <LI> <A href="" target="_blank"> Machine account password resets and Macs </A> </LI> <BR /> <LI> <A href="" target="_blank"> Windows 7 cached logons </A> </LI> <BR /> <LI> <A href="" target="_blank"> DES encryption in Kerberos </A> </LI> <BR /> <LI> <A href="" target="_blank"> Certificate renewal with CEP/CES </A> </LI> <BR /> </UL> <BR /> <P> Before we get started, however, I wanted to share information about a new service available to Premier customers through <A href="#" target="_blank"> Microsoft Services Premier Support </A> . Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the <A href="#" target="_blank"> RAP as a Service (or <STRONG> RaaS </STRONG> for short) </A> . Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a <A href="#" target="_blank"> description of the new offering </A> , and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. 
Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> Now on to the Mail Sack! </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade? </P> <BR /> <H2> Answer </H2> <BR /> <P> The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 and legacy operating system DFSR. You just mentioned upgrading your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them. </P> <BR /> <P> Check out Ned's post over on the FileCab blog: <A href="#" target="_blank"> DFS Replication Improvements in Windows Server </A> . Specifically, review the section that discusses Dynamic Access Control Support. </P> <BR /> <P> Also, there is a minor issue that has been found that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012 the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> You can also do this with PowerShell: </P> <BR /> <CODE> Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con <BR /> Install-WindowsFeature -name RSAT-DFS-Mgmt-Con </CODE> <BR /> <P> </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> From our SharePoint site, when users click on log-off then they get sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0. 
</P> <BR /> <P> We configured the FedAuth cookie to be session-based after we did this: </P> <BR /> <CODE> $sts = Get-SPSecurityTokenServiceConfig </CODE> <BR /> <CODE> $sts.UseSessionCookies = $true </CODE> <BR /> <CODE> $sts.Update() </CODE> <BR /> <P> </P> <BR /> <P> The problem is that, unless the user closes all their browsers, the browser remembers their credentials when they return to the log-in page. This is not acceptable, since some PCs are shared by several people. Also, closing all browsers is not acceptable, as our users run multiple web applications. </P> <BR /> <H2> Answer </H2> <BR /> <P> (Courtesy of <A href="#" target="_blank"> Adam Conkle </A> ) </P> <BR /> <P> Great question! I hope the following details help you in your deployment: </P> <BR /> <P> Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010. </P> <BR /> <P> When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed in to during the session. </P> <BR /> <P> However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (credentials provided in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session. </P> <BR /> <P> If you were to collect an HTTP headers trace (Fiddler, HTTPWatch, etc.)
of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with an HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host. </P> <BR /> <P> The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again. </P> <BR /> <P> To conclude, this is by-design behavior for web browser clients. There are two workarounds available: </P> <BR /> <H3> Workaround 1 </H3> <BR /> <P> Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type </P> <BR /> <H3> Workaround 2 </H3> <BR /> <P> Instruct your user base to always close their web browser when they have finished their session. </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate the file to another server that uses the same policy, will the file have the same effective permissions on it?
</P> <BR /> <H2> Answer </H2> <BR /> <P> (Courtesy of <A href="#" target="_blank"> Mike Stephens </A> ) </P> <BR /> <P> Let me clarify some aspects of your question as I answer each part. </P> <BR /> <P> When enabling <A href="#" target="_blank"> Dynamic Access Control </A> on files and folders, there are multiple aspects to consider that are stored on the files and folders. </P> <BR /> <H3> Resource Properties </H3> <BR /> <P> Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor. </P> <BR /> <H3> Security Descriptor </H3> <BR /> <P> The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor. </P> <BR /> <P> All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results. </P> <BR /> <H3> Central Access Policy </H3> <BR /> <P> Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission. </P> <BR /> <P> If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server.
The permissions would not apply. </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> I read the post located <A href="#" target="_blank"> here </A> regarding the machine account password change in Active Directory. </P> <BR /> <P> Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told, (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain. </P> <BR /> <P> I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems. </P> <BR /> <P> I have reset the password reset interval to be various durations from fourteen days which is the default, to one day. </P> <BR /> <P> I have found that if I disjoin and rejoin the machine to the domain it will generate a new password and work just fine for 30 days. At that time, it will be dropped from the domain and have to be rejoined. This is not 100% of the time, however it is often enough to be a problem for us as we are a higher education institution which in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script which runs every 60 days to delete machine accounts from AD to keep it clean, so if the machine has been turned off for more than 60 days, the account no longer exists. </P> <BR /> <P> I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement. </P> <BR /> <P> Other Mac admins have found workarounds like eliminating the need for the pw reset or exempting the macs from the script, but our security team does not want to do this. 
</P> <BR /> <H2> Answer </H2> <BR /> <P> (Courtesy of <A href="#" target="_blank"> Mike Stephens </A> ) </P> <BR /> <P> Windows has a security policy feature named <STRONG> Domain member: Disable machine account password change </STRONG> , which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with these on the Mac; however, on Linux, you would use the command </P> <BR /> <CODE> net ads changetrustpw </CODE> <BR /> <P> </P> <BR /> <P> By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage. </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client, which is shared by two users (userA and userB), I see the following event in the Event Viewer after userA logged on. </P> <BR /> <CODE> Event ID: 45058 </CODE> <BR /> <CODE> Source: LsaSrv </CODE> <BR /> <CODE> Level: Information </CODE> <BR /> <CODE> Description: </CODE> <BR /> <CODE> A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. </CODE> <BR /> <P> </P> <BR /> <P> All is working fine. Both userA and userB are able to log on to the domain by using this computer. Do you think I have to worry about this message, or can I just safely ignore it? </P> <BR /> <P> FYI, our users never work offline, only online. </P> <BR /> <H2> Answer </H2> <BR /> <P> By default, a Windows operating system will cache 10 domain user credentials locally.
When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again. </P> <BR /> <P> The number of "slots" available to store credentials is controlled by: </P> <BR /> <P> Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon <BR /> Setting Name: CachedLogonsCount <BR /> Data Type: REG_SZ <BR /> Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1 </P> <BR /> <P> Cached credentials can also be managed with group policy by configuring: </P> <BR /> <P> Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options. <BR /> Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available) </P> <BR /> <P> The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, to cache their credentials again once they have been purged from the system. </P> <BR /> <P> I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time. </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> In Windows 7 and Windows Server 2008 R2, Kerberos DES encryption is disabled by default. </P> <BR /> <P> At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows? </P> <BR /> <H2> Answer </H2> <BR /> <P> DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default.
It is too early to discuss the availability of DES in future versions of Windows right now. </P> <BR /> <P> There was an <A href="#" target="_blank"> Advisory Memorandum </A> published in 2005 by the Committee on National Security Systems (CNSS) stating that DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future. </P> <BR /> <P> I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES. </P> <BR /> <H2> <A> </A> Question </H2> <BR /> <P> I have two issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from the Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the <A href="#" target="_blank"> whitepaper </A> is that this is not going to work, since the CA that issued the certificate must be the same CA used for certificate renewal. </P> <BR /> <H2> Answer </H2> <BR /> <P> First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA as the one that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal: </P> <BR /> <OL> <BR /> <LI> When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
</LI> <BR /> <LI> The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it. </LI> <BR /> <LI> The list of CES URLs is returned to the client. This list is unordered. </LI> <BR /> <LI> The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template and the certificate is renewed 100 times, either with or without the same key, that should result in a nearly 50/50 distribution between the two CAs. </LI> <BR /> </OL> <BR /> <P> The behavior is slightly different if one of your CAs goes down for some reason. In that case, should a client encounter an error when trying to renew a certificate against one of the CES URIs, it will fail over and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal. </P> <BR /> <H2> Other Stuff </H2> <BR /> <P> I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic. </P> <BR /> <P> <IFRAME frameborder="0" height="360" src="" width="640"> </IFRAME> </P> <BR /> <P> This was also on my list this year. Go Science! </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!
</P> <BR /> <P> Jonathan "13 <SUP> th </SUP> baktun" Stephens </P> <BR /> <P> </P> <BR /> <P> </P> </BODY></HTML> Fri, 05 Apr 2019 02:44:35 GMT Ryan Ries 2019-04-05T02:44:35Z Revenge of Y2K and Other News <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Nov 27, 2012 </STRONG> <BR /> Hello sports fans! <BR /> <BR /> So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here are just some of the things that have been going on around here. <BR /> <BR /> Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. Here's the link to the <A href="#" target="_blank"> AskPFEPlat blog </A> . <BR /> <BR /> In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on <A href="#" target="_blank"> The Storage Team </A> blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go <A href="#" target="_blank"> check him out </A> . <BR /> <BR /> You've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however!
I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up. <BR /> <BR /> In the mean time, if you have questions about anything you see on the blog, don't hesitate to <A href="#" target="_blank"> contact us </A> . <BR /> <BR /> <P> Jonathan "time to make the donuts" Stephens </P> </BODY></HTML> Fri, 05 Apr 2019 02:44:03 GMT Ryan Ries 2019-04-05T02:44:03Z ADAMSync 101 <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Nov 12, 2012 </STRONG> <BR /> <P> Hi Everyone, <A href="#" target="_blank"> Kim Nichols </A> here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync. </P> <BR /> <P> We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment. </P> <BR /> <H2> What is ADAM/AD LDS? </H2> <BR /> <P> ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). 
AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema. </P> <BR /> <P> AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD). </P> <BR /> <P> Here's some more information on AD LDS if you're new to it: </P> <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> AD LDS Installed Help </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Active Directory Lightweight Directory Services (AD LDS) Getting Started Step-by-Step Guide </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Active Directory Lightweight Directory Services Overview </A> </LI> <BR /> <LI> <A href="#" target="_blank"> AD LDS Backup and Restore Step-by-Step Guide </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Active Directory Lightweight Directory Services Operations Guide </A> </LI> <BR /> </UL> <BR /> <H2> What is ADAMSync? </H2> <BR /> <P> In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS. 
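</P> <BR /> <P> Putting the pieces together, a first ADAMSync run typically boils down to three commands at an elevated command prompt on the AD LDS server. The server name, port, partition DN, and file paths below are placeholders for illustration, and the exact ldifde switches can vary by instance, so treat this as a sketch of the flow rather than a copy-paste recipe (check the AD LDS and ADAMSync documentation for the precise syntax in your environment): </P>

```shell
# 1. Extend the AD LDS schema so the instance can store the sync
#    configuration (MS-AdamSyncMetadata.LDF ships in %windir%\ADAM).
ldifde -i -u -f MS-AdamSyncMetadata.LDF -s localhost:389 -j . -c "cn=Configuration,dc=X" #configurationNamingContext

# 2. Load your edited XML configuration into the instance.
adamsync /install localhost:389 MS-AdamSyncConf.xml

# 3. Run the synchronization against your application partition, with logging.
adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log
```

<P> Every time you change the XML file, repeat step 2 to reload it before running step 3 again.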
</P> <BR /> <P> It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.) </P> <BR /> <P> ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database. </P> <BR /> <P> In order for ADAMSync to work: </P> <BR /> <OL> <BR /> <LI> The <STRONG> MS-AdamSyncMetadata.LDF </STRONG> file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file. This LDF creates the classes and attributes for storing the ADAMSync.xml file. </LI> <BR /> <LI> The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a <A href="#" target="_blank"> blog post </A> on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS. </LI> <BR /> <LI> <BR /> <DIV> Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=&lt;partition name&gt;, DC=&lt;com or local or whatever&gt; and not CN=&lt;partition name&gt;. 
Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.&nbsp; If you are going to be using ADAMSync, you should disregard that example and use DC= instead.&nbsp; The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=. </DIV> <BR /> <P> <BR /> <IMG src="" /> <BR /> <BR /> Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with <STRONG> DC= </STRONG> name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future. </P> <BR /> </LI> <BR /> </OL> <BR /> <P> The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide of object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin in troubleshooting. </P> <BR /> <P> <A href="#" target="_blank"> <STRONG> KEEP IT SIMPLE!!! </STRONG> </A> </P> <BR /> <H2> MS-AdamSyncConf.xml </H2> <BR /> <P> Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory. 
</P> <BR /> <P> &lt;?xml version="1.0"?&gt; <BR /> &lt;doc&gt; <BR /> &lt;configuration&gt; <BR /> &lt;description&gt;sample Adamsync configuration file&lt;/description&gt; <BR /> &lt;security-mode&gt;object&lt;/security-mode&gt; <BR /> &lt;source-ad-name&gt;&lt;/source-ad-name&gt; &lt;------ 1 <BR /> &lt;source-ad-partition&gt;dc=fabrikam,dc=com&lt;/source-ad-partition&gt; &lt;------ 2 <BR /> &lt;source-ad-account&gt;&lt;/source-ad-account&gt; &lt;------ 3 <BR /> &lt;account-domain&gt;&lt;/account-domain&gt; &lt;------ 4 <BR /> &lt;target-dn&gt;dc=fabrikam,dc=com&lt;/target-dn&gt; &lt;------ 5 <BR /> &lt;query&gt; <BR /> &lt;base-dn&gt;dc=fabrikam,dc=com&lt;/base-dn&gt; &lt;------ 6 <BR /> &lt;object-filter&gt;(objectClass=*)&lt;/object-filter&gt; &lt;------ 7 <BR /> &lt;attributes&gt; &lt;------ 8 <BR /> &lt;include&gt;&lt;/include&gt; <BR /> &lt;exclude&gt;extensionName&lt;/exclude&gt; <BR /> &lt;exclude&gt;displayNamePrintable&lt;/exclude&gt; <BR /> &lt;exclude&gt;flags&lt;/exclude&gt; <BR /> &lt;exclude&gt;isPrivelegeHolder&lt;/exclude&gt; <BR /> &lt;exclude&gt;msCom-UserLink&lt;/exclude&gt; <BR /> &lt;exclude&gt;msCom-PartitionSetLink&lt;/exclude&gt; <BR /> &lt;exclude&gt;reports&lt;/exclude&gt; <BR /> &lt;exclude&gt;serviceprincipalname&lt;/exclude&gt; <BR /> &lt;exclude&gt;accountExpires&lt;/exclude&gt; <BR /> &lt;exclude&gt;adminCount&lt;/exclude&gt; <BR /> &lt;exclude&gt;primarygroupid&lt;/exclude&gt; <BR /> &lt;exclude&gt;userAccountControl&lt;/exclude&gt; <BR /> &lt;exclude&gt;codePage&lt;/exclude&gt; <BR /> &lt;exclude&gt;countryCode&lt;/exclude&gt; <BR /> &lt;exclude&gt;logonhours&lt;/exclude&gt; <BR /> &lt;exclude&gt;lockoutTime&lt;/exclude&gt; <BR /> &lt;/attributes&gt; <BR /> &lt;/query&gt; <BR /> &lt;schedule&gt; <BR /> &lt;aging&gt; <BR /> &lt;frequency&gt;0&lt;/frequency&gt; <BR /> &lt;num-objects&gt;0&lt;/num-objects&gt; <BR /> &lt;/aging&gt; <BR /> &lt;schtasks-cmd&gt;&lt;/schtasks-cmd&gt; <BR /> &lt;/schedule&gt; &lt;------ 9 <BR /> 
&lt;/configuration&gt; <BR /> &lt;synchronizer-state&gt; <BR /> &lt;dirsync-cookie&gt;&lt;/dirsync-cookie&gt; <BR /> &lt;status&gt;&lt;/status&gt; <BR /> &lt;authoritative-adam-instance&gt;&lt;/authoritative-adam-instance&gt; <BR /> &lt;configuration-file-guid&gt;&lt;/configuration-file-guid&gt; <BR /> &lt;last-sync-attempt-time&gt;&lt;/last-sync-attempt-time&gt; <BR /> &lt;last-sync-success-time&gt;&lt;/last-sync-success-time&gt; <BR /> &lt;last-sync-error-time&gt;&lt;/last-sync-error-time&gt; <BR /> &lt;last-sync-error-string&gt;&lt;/last-sync-error-string&gt; <BR /> &lt;consecutive-sync-failures&gt;&lt;/consecutive-sync-failures&gt; <BR /> &lt;user-credentials&gt;&lt;/user-credentials&gt; <BR /> &lt;runs-since-last-object-update&gt;&lt;/runs-since-last-object-update&gt; <BR /> &lt;runs-since-last-full-sync&gt;&lt;/runs-since-last-full-sync&gt; <BR /> &lt;/synchronizer-state&gt; <BR /> &lt;/doc&gt; </P> <BR /> <P> Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers. </P> <BR /> <OL> <BR /> <LI> <BR /> <DIV> &lt;source-ad-name&gt;&lt;/source-ad-name&gt; </DIV> <BR /> <P> Replace with the FQDN of the domain/forest that will be your synchronization source </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;source-ad-partition&gt;dc=fabrikam,dc=com&lt;/source-ad-partition&gt; </DIV> <BR /> <P> Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;source-ad-account&gt;&lt;/source-ad-account&gt; </DIV> <BR /> <P> Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;account-domain&gt;&lt;/account-domain&gt; </DIV> <BR /> <P> Contains the domain name to use for authentication to the source domain/forest. 
This element combined with &lt;source-ad-account&gt; makes up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used. </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;target-dn&gt;dc=fabrikam,dc=com&lt;/target-dn&gt; </DIV> <BR /> <P> Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to. <BR /> <BR /> <STRONG> NOTE: </STRONG> In 2003 ADAM, you were able to specify a sub-OU or container of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented <A href="#" target="_blank"> here </A> . </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;base-dn&gt;dc=fabrikam,dc=com&lt;/base-dn&gt; </DIV> <BR /> <P> Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from. <BR /> <BR /> <STRONG> NOTE: </STRONG> You can specify multiple base DNs in the XML file, but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and confusing output in the adamsync.log file. The short of it is that even though you are limiting where objects are synchronized from, doing so doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are syncing only a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized. 
</P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;object-filter&gt;(objectClass=*)&lt;/object-filter&gt; </DIV> <BR /> <P> The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups. <BR /> <BR /> The filter that I generally recommend as a starting point is: <BR /> <BR /> (&amp;#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)) <BR /> <BR /> Rather than <EM> objectClass=User </EM> , I recommend <EM> objectCategory=Person </EM> . But, why, you ask? I'll tell you :) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user. <BR /> <BR /> <IMG src="" /> <BR /> <BR /> What this means to ADAMSync is that if I specify an object filter of <EM> objectClass=user </EM> , ADAMSync will synchronize <STRONG> users </STRONG> and <STRONG> computers </STRONG> (and <STRONG> contact </STRONG> objects and anything else that inherits from the User class). However, if I use <EM> objectCategory=Person </EM> , I only get actual user objects. Pretty neat, eh? <BR /> <BR /> So, what does this <STRONG> &amp;#124; </STRONG> mean and why include <EM> objectCategory=OrganizationalUnit </EM> ? The literal <STRONG> &amp;#124; </STRONG> is the XML representation of the <STRONG> | </STRONG> (pipe) character, which represents a logical OR. True, I've seen customers just use the <STRONG> | </STRONG> character in the XML file and not have issues, but I always use the XML rather than the <STRONG> | </STRONG> just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for <STRONG> &amp; </STRONG> is <STRONG> &amp;amp; </STRONG> . 
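If you want to double-check that escaping before pasting a filter into the XML file, a few lines of Python will do it. This is purely an illustrative sketch (the to_xml_filter helper is my own, not part of any ADAMSync tooling):

```python
from xml.sax.saxutils import escape

# Illustrative helper (not part of ADAMSync): escape an LDAP filter for
# embedding in the <object-filter> element. escape() handles &, < and >;
# the pipe is legal in XML, but we encode it as &#124; anyway, to be safe.
def to_xml_filter(ldap_filter):
    return escape(ldap_filter).replace("|", "&#124;")

print(to_xml_filter("(|(objectCategory=Person)(objectCategory=OrganizationalUnit))"))
# (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))
print(to_xml_filter("(&(objectCategory=Person)(objectCategory=OrganizationalUnit))"))
# (&amp;(objectCategory=Person)(objectCategory=OrganizationalUnit))
```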
<BR /> <BR /> You need <EM> objectCategory=OrganizationalUnit </EM> so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs, the object just "disappears" from an ADAMSync perspective and never gets updated/moved. <BR /> <BR /> If you need groups to be synchronized as well, you can add <EM> (objectclass=group) </EM> inside the outer parentheses and groups will also be synced. <BR /> <BR /> (&amp;#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group)) <BR /> </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;attributes&gt; </DIV> <BR /> <P> The attributes section is where you define which attributes to synchronize for the object types defined in the &lt;object-filter&gt;. <BR /> <BR /> You can either use the &lt;include&gt;&lt;/include&gt; or &lt;exclude&gt;&lt;/exclude&gt; tags, but you cannot use both. <BR /> <BR /> The default XML file provided by Microsoft takes the high ground and uses the &lt;exclude&gt;&lt;/exclude&gt; tags, which really means include all attributes except the ones that are explicitly defined within the &lt;exclude&gt;&lt;/exclude&gt; element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting. <BR /> <BR /> If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity. 
<BR /> <BR /> When you use the &lt;exclude&gt;&lt;/exclude&gt; tags, you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use the attribute, there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame. <BR /> <BR /> <IMG src="" /> <BR /> <STRONG> Whoosh!! <BR /> </STRONG> <BR /> A typical attributes section to start out with is something like this: <BR /> <BR /> </P> <BR /> <P> &lt;include&gt;objectSID&lt;/include&gt; &lt;----- only needed for userproxy <BR /> &lt;include&gt;userPrincipalName&lt;/include&gt; &lt;----- must be unique in AD LDS instance <BR /> &lt;include&gt;displayName&lt;/include&gt; <BR /> &lt;include&gt;givenName&lt;/include&gt; <BR /> &lt;include&gt;sn&lt;/include&gt; <BR /> &lt;include&gt;physicalDeliveryOfficeName&lt;/include&gt; <BR /> &lt;include&gt;telephoneNumber&lt;/include&gt; <BR /> &lt;include&gt;mail&lt;/include&gt; <BR /> &lt;include&gt;title&lt;/include&gt; <BR /> &lt;include&gt;department&lt;/include&gt; <BR /> &lt;include&gt;manager&lt;/include&gt; <BR /> &lt;include&gt;mobile&lt;/include&gt; <BR /> &lt;include&gt;ipPhone&lt;/include&gt; <BR /> &lt;exclude&gt;&lt;/exclude&gt; </P> <BR /> <P> Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. 
Active Directory allows multiple accounts to have the same userPrincipalName, but <STRONG> ADAMSync will not sync an object if it has the same userPrincipalName as an object that already exists in the AD LDS database. </STRONG> <BR /> <BR /> If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO) to find the duplicates. <BR /> <BR /> csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName <BR /> <BR /> Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam, DC=com" with your AD domain's DN. </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> &lt;/schedule&gt; </DIV> <BR /> <P> After &lt;/schedule&gt; is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user. 
<BR /> <BR /> Here is an example of what you would put after &lt;/schedule&gt; and before &lt;/configuration&gt; <BR /> <BR /> &lt;user-proxy&gt; <BR /> &lt;source-object-class&gt;user&lt;/source-object-class&gt; <BR /> &lt;target-object-class&gt;userProxyFull&lt;/target-object-class&gt; <BR /> &lt;/user-proxy&gt; </P> <BR /> </LI> <BR /> </OL> <BR /> <H2> Installing the XML file </H2> <BR /> <P> OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM): </P> <BR /> <P> Adamsync /install localhost:389 CustomAdamsync.xml </P> <BR /> <P> The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command. </P> <BR /> <P> What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The implication of this is that anytime you update the XML file you must reinstall it using the <STRONG> adamsync /install </STRONG> command otherwise the version in the instance is not updated. I've made this mistake a number of times during troubleshooting! 
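By the way, if you'd rather script the duplicate-UPN hunt from the csvde export than eyeball it in Excel, a minimal Python sketch might look like the following. It assumes the column layout produced by the csvde command shown earlier (DN, sAMAccountName, userPrincipalName); the inline sample rows stand in for the real upn.csv:

```python
import csv
import io
from collections import Counter

# Illustrative sketch: find userPrincipalName values that appear on more
# than one account in a csvde export. Comparison is case-insensitive,
# since UPN uniqueness checks are not case-sensitive.
def duplicate_upns(rows):
    counts = Counter(
        row["userPrincipalName"].lower()
        for row in rows
        if row.get("userPrincipalName")
    )
    return sorted(upn for upn, n in counts.items() if n > 1)

# Sample data standing in for the upn.csv file from the csvde command.
sample = io.StringIO(
    "DN,sAMAccountName,userPrincipalName\n"
    '"CN=Bob,DC=fabrikam,DC=com",bob,bob@fabrikam.com\n'
    '"CN=Bob2,DC=fabrikam,DC=com",bob2,bob@fabrikam.com\n'
    '"CN=Amy,DC=fabrikam,DC=com",amy,amy@fabrikam.com\n'
)
print(duplicate_upns(csv.DictReader(sample)))  # ['bob@fabrikam.com']
```

Clean up any duplicates it reports before you run your first sync and you'll save yourself a log full of ATT_OR_VALUE_EXISTS errors.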
</P> <BR /> <H2> Synchronizing with AD </H2> <BR /> <P> Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue). </P> <BR /> <P> From the ADAM directory (typically %windir%\ADAM), run the following command: </P> <BR /> <P> Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log </P> <BR /> <P> Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand. </P> <BR /> <P> ==================================================== <BR /> Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0 <BR /> Processing source entry &lt;guid=fe36238b9dd27a45b96304ea820c82d8&gt; <BR /> Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8. <BR /> <BR /> Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1: <BR /> 0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName) <BR /> <BR /> . Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. 
Extended Info: 0000217B: AtrErr: DSID-03050758, #1: <BR /> 0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName) <BR /> =============================================== </P> <BR /> <P> During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects are appearing in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes, you will be returned to the command prompt and your log file will stop growing. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H2> Did it work? </H2> <BR /> <P> As you can see, there is nothing on the command line nor are there any events in any Windows event log that indicate that the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this. </P> <BR /> <P> Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. 
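Because the log is the only place success is recorded, your scheduled task could even end with an automated check for the marker line a good run writes near the bottom of the log ("Finished (successful) synchronization run."). A quick Python sketch of the idea, using inline strings in place of a real adamsync.log:

```python
# Illustrative sketch: a good ADAMSync run records this marker near the
# bottom of its log; a run that died mid-sync never writes it.
SUCCESS_MARKER = "Finished (successful) synchronization run."

def sync_succeeded(log_text):
    return SUCCESS_MARKER in log_text

# Inline stand-ins for a real adamsync.log.
good = "Beginning processing of deferred dn references.\n" + SUCCESS_MARKER + "\nNumber of entries processed via dirSync: 16\n"
bad = "Ldap error occurred. ldap_add_sW: Attribute Or Value Exists.\n"
print(sync_succeeded(good), sync_succeeded(bad))  # True False
```

In real use you would read the latest log file and raise an alert (or just flip the task's exit code) when the marker is missing.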
The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below: </P> <BR /> <P> Updating the configuration file DirSync cookie with a new value. <BR /> <BR /> Beginning processing of deferred dn references. <BR /> Finished processing of deferred dn references. <BR /> <BR /> Finished (successful) synchronization run. <BR /> Number of entries processed via dirSync: 16 <BR /> Number of entries processed via ldap: 0 <BR /> Processing took 4 seconds (0, 0). <BR /> Number of object additions: 3 <BR /> Number of object modifications: 13 <BR /> Number of object deletions: 0 <BR /> Number of object renames: 2 <BR /> Number of references processed / dropped: 0, 0 <BR /> Maximum number of attributes seen on a single object: 9 <BR /> Maximum number of values retrieved via range syntax: 0 <BR /> <BR /> Beginning aging run. <BR /> Aging requested every 0 runs. We last aged 2 runs ago. <BR /> Saving Configuration File on DC=instance1,DC=local <BR /> Saved configuration file. </P> <BR /> <P> If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN. </P> <BR /> <H2> Conclusion and other References </H2> <BR /> <P> That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from a broken state it can be difficult to troubleshoot. Also, I highly recommend using &lt;include&gt; over &lt;exclude&gt; when specifying attributes to synchronize. 
This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier! </P> <BR /> <H3> ADAMSync excluded objects </H3> <BR /> <P> As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID). </P> <BR /> Attributes <BR /> <P> cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged </P> <BR /> Classes <BR /> <P> crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager </P> <BR /> Other <BR /> <P> Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\) </P> <BR /> <H3> Additional Goodies </H3> <BR /> ADAMSync <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> Synchronize with Active Directory Domain Services </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Determine Applied Schema Extensions with AD DS/LDS Schema Analyzer </A> </LI> <BR /> <LI> <A href="#" 
target="_blank"> ADAMSync Configuration File XML Reference </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Active Directory Understanding Proxy Authentication in AD LDS </A> </LI> <BR /> <LI> <A href="#" target="_blank"> ADAMSync can also transform users in to proxy users </A> </LI> <BR /> </UL> <BR /> AD LDS Replication <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> AD LDS Replication Step-by-Step Guide </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Link-Pairs and Configuring Bridgeheads in ADAM-AD </A> </LI> <BR /> </UL> <BR /> Misc Blogs <BR /> <UL> <BR /> <LI> <A href="#" target="_blank"> AD LDS Schema Files </A> </LI> <BR /> <LI> <A href="#" target="_blank"> How to Decommission an AD LDS server and add Additional Servers </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Overview of authentication mechanisms in AD LDS </A> </LI> <BR /> <LI> <A href="#" target="_blank"> One stop Audit shop for ADAM and AD LDS </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Service Connection Points (SCPs) and ADAM-AD LDS </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Directory Services API Element Differences (Windows) </A> </LI> <BR /> <LI> <A href="#" target="_blank"> Windows 2008 R2 Managing AD LDS using the AD PowerShell Module </A> </LI> <BR /> </UL> <BR /> <P> GOOD LUCK and ENJOY! </P> <BR /> <P> Kim "Sync or swim" Nichols </P> </BODY></HTML> Fri, 05 Apr 2019 02:43:55 GMT Ryan Ries 2019-04-05T02:43:55Z ....And knowing is half the battle! <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 31, 2012 </STRONG> <BR /> <A href="#" target="_blank"> Jonathan </A> here. <A href="#" target="_blank"> Chuck Timon </A> over on the <A href="#" target="_blank"> AskCore blog </A> has a new post that you folks testing with <A href="#" target="_blank"> Windows Server 2012 </A> should know about. If you're playing around with Hyper-V, do yourself a favor and have a read before you call Support. 
<BR /> <BR /> <A href="#" target="_blank"> Logon Failures Involving Virtual Machines in Windows Server 2012 </A> <BR /> <BR /> <P> Jonathan "Snake Eyes" Stephens </P> </BODY></HTML> Fri, 05 Apr 2019 02:43:05 GMT Ryan Ries 2019-04-05T02:43:05Z Digging a little deeper into Windows 8 Primary Computer <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 23, 2012 </STRONG> <BR /> <P> <EM> [This is a <A href="#" target="_blank"> ghost of Ned past </A> article – Editor] </EM> </P> <BR /> <P> Hi folks, <A href="#" target="_blank"> Ned </A> here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta <A href="#" target="_blank"> blog post </A> and if you just want a step-by-step guide to enabling this feature, <A href="#" target="_blank"> TechNet does it best </A> . Today I am going to fill in some blanks and make sure the feature's architecture and usefulness are clear. At least, I'm going to try. </P> <BR /> <P> Onward! </P> <BR /> <H2> Backgrounder and Requirements </H2> <BR /> <P> Businesses using Roaming User Profiles, Offline Files and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per computer basis, they affect all users of that computer and are useless if you assign roaming profiles through legacy user attributes. </P> <BR /> <P> Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. 
There are three main benefits to using Primary Computer: </P> <BR /> <OL> <BR /> <LI> When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security. </LI> <BR /> <LI> Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings. </LI> <BR /> <LI> The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since a roaming profile and folder redirection no longer synchronously cache data on the computer during logon, a user connecting from a temporary or home machine logs on considerably faster. </LI> <BR /> </OL> <BR /> <P> By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it. </P> <BR /> <P> <IMG src="" /> <BR /> Yoink, stolen screenshot from a <EM> much </EM> better artist </P> <BR /> <P> Primary Computer has the following requirements: </P> <BR /> <UL> <BR /> <LI> Windows 8 or Windows Server 2012 computers used for interactive logon </LI> <BR /> <LI> Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs) </LI> <BR /> <LI> Group Policy managed from Windows 8 or Windows Server 2012 GPMC </LI> <BR /> <LI> Some mechanism to determine each user's primary computer(s) </LI> <BR /> </UL> <BR /> <H2> Determining Primary Computers </H2> <BR /> <P> There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. 
There are a number of out of band options to determine computer usage though: </P> <BR /> <UL> <BR /> <LI> <STRONG> System Center Configuration Manager </STRONG> - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in <A href="#" target="_blank"> SCCM 2012 </A> and <A href="#" target="_blank"> 2007 R2 </A> . This is the recommended method as it's the most comprehensive and because I like money. </LI> <BR /> <LI> <BR /> <DIV> <STRONG> Collecting 4624 events </STRONG> - the Security event log Logon Event <STRONG> 4624 </STRONG> with a <STRONG> Logon Type 2 </STRONG> delineates where a user logged on interactively. By collecting these events using some type of audit collection service or <A href="#" target="_blank"> event forwarding </A> , you can build up a picture of which users are logging on to which computers repeatedly. </DIV> <BR /> <P> <IMG src="" /> </P> <BR /> <P> </P> <BR /> <P> </P> <BR /> </LI> <BR /> <LI> <BR /> <DIV> <STRONG> Logon Script </STRONG> – If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the <STRONG> Comment </STRONG> attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user. 
</DIV> <BR /> <P> For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value: </P> <BR /> <P> Set objSysInfo = CreateObject("ADSystemInfo") </P> <BR /> <P> Set objUser = GetObject("LDAP://" &amp; objSysInfo.UserName) </P> <BR /> <P> Set objComputer = GetObject("LDAP://" &amp; objSysInfo.ComputerName) </P> <BR /> <P> </P> <BR /> <P> strMessage = objComputer.distinguishedName </P> <BR /> <P> If objUser.Comment = strMessage Then WScript.Quit </P> <BR /> <P> </P> <BR /> <P> objUser.Comment = strMessage </P> <BR /> <P> objUser.SetInfo </P> <BR /> </LI> <BR /> </UL> <BR /> <P> </P> <BR /> <P> A user may have more than one computer they log on to regularly, though, and if that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to ensure it does not update with redundant data. Otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing. </P> <BR /> <UL> <BR /> <LI> <STRONG> PsLoggedOn </STRONG> - you can script and run <A href="#" target="_blank"> PsLoggedOn.exe </A> (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall. </LI> <BR /> <LI> <STRONG> Third parties </STRONG> - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear. 
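Whichever collection method you use - 4624 events, a logon script, or PsLoggedOn sweeps - the aggregation step at the end is the same: tally interactive logons per user/computer pair and keep each user's most frequently used machines. Here is a minimal Python sketch of that tallying logic; the record format and function name are purely illustrative, not part of any Windows tooling:

```python
from collections import Counter

def top_computers(logon_records, per_user=2):
    """Given (user, computer) pairs harvested from 4624/Logon Type 2
    events (or any collection method above), return each user's most
    frequently used computers, best candidates first."""
    counts = Counter(logon_records)  # (user, computer) -> logon count
    per_user_counts = {}
    for (user, computer), n in counts.items():
        per_user_counts.setdefault(user, []).append((n, computer))
    return {
        user: [c for _, c in sorted(pairs, reverse=True)[:per_user]]
        for user, pairs in per_user_counts.items()
    }

# Illustrative records, e.g. parsed out of forwarded 4624 events
records = [
    ("stduser", "CLI1"), ("stduser", "CLI1"), ("stduser", "CLI1"),
    ("stduser", "KIOSK7"),
    ("jdoe", "CLI2"), ("jdoe", "CLI2"),
]
print(top_computers(records, per_user=1))
```

The resulting per-user list is what you would then feed into msDS-PrimaryComputer assignment, however you collected the raw pairs.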
</LI> <BR /> </UL> <BR /> <H2> Setting the Primary Computer </H2> <BR /> <P> As I mentioned before, look at <A href="#" target="_blank"> TechNet for some DSAC step-by-step </A> for setting the <STRONG> msDS-PrimaryComputer </STRONG> attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting <A href="#" target="_blank"> out of band module </A> , here are some more juice-flow inducing samples. </P> <BR /> <P> The ActiveDirectory Windows PowerShell module <STRONG> get-adcomputer </STRONG> and <STRONG> set-aduser </STRONG> cmdlets allow you to easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use assigned variables for readability, or with nested functions for simplicity. </P> <BR /> Variable <BR /> <P> <EM> &lt;$variable&gt; </EM> = get-adcomputer <EM> &lt;computer name&gt; </EM> </P> <BR /> <P> Set-aduser <EM> &lt;user name&gt; </EM> -add @{'msDS-PrimaryComputer'=" <EM> &lt;$variable&gt; </EM> "} </P> <BR /> <P> For example, with a computer named <STRONG> cli1 </STRONG> and a user name <STRONG> stduser </STRONG> : </P> <BR /> <P> <IMG src="" /> </P> <BR /> Nested <STRONG> </STRONG> <BR /> <P> Set-aduser <EM> &lt;user name&gt; </EM> -add @{'msDS-PrimaryComputer'=(get-adcomputer <EM> &lt;computer name&gt; </EM> ).distinguishedname} </P> <BR /> <P> For example, with that same user and computer: </P> <BR /> <P> <IMG src="" /> </P> <BR /> Other techniques <BR /> <P> If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - like described in the earlier section - here is an example that reads the <STRONG> stduser </STRONG> attribute <STRONG> Comment </STRONG> and assigns primary computer based on the contents: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If you wanted to assign primary computers to all of the users within the <STRONG> Foo OU </STRONG> based on their comment attributes, you could 
use this example: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the <STRONG> import-csv </STRONG> cmdlet to update the users. For example: </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless. </P> <BR /> <H2> Cached Data Clearing GP </H2> <BR /> <P> Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. I.e. if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer means only that further data is not copied locally to non-approved computers. </P> <BR /> <P> In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart: </P> <BR /> <UL> <BR /> <LI> <STRONG> Delete user profiles older than a specified number of days on system restart </STRONG> - this deletes unused profiles after <EM> N </EM> days when a computer reboots </LI> <BR /> <LI> <STRONG> Delete cached copies of roaming profiles </STRONG> - this removes locally saved roaming profiles once a user logs off. 
This policy would also apply to Primary Computers and should be used with caution </LI> <BR /> </UL> <BR /> <P> <IMG src="" /> </P> <BR /> <P> In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial: </P> <BR /> <UL> <BR /> <LI> <BR /> <DIV> When a computer needs to remove FR after becoming "non-primary" - due to the primary computer feature either being enabled or the machine being removed from the primary computer list for the user - the removal behavior will depend on how the FR policy is configured to behave on removal. It can be configured to either: </DIV> <BR /> <UL> <BR /> <LI> <STRONG> Redirect the folder back to the local profile </STRONG> – the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data is copied from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache </LI> <BR /> <LI> <STRONG> Leave the folder pointing to the file server </STRONG> – the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy </LI> <BR /> </UL> <BR /> </LI> <BR /> </UL> <BR /> <P> In both cases, once the data is unpinned from the Offline Files cache, it will be evicted from the computer in the background after 15 minutes. </P> <BR /> <H2> Logging Primary Computer Usage </H2> <BR /> <P> To see that the <STRONG> Download roaming profiles on primary computers only </STRONG> policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. 
This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user": </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> The new User Profile Service events for Primary Computer are all in the Operational event log: </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Event ID </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> 62 </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Severity </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Warning </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Message </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Notes and resolution </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Indicates an issue contacting LDAP on a domain controller. 
Examine the extended error, examine System and Application event logs for further details, consider getting a network capture if still unclear </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Event ID </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> 63 </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Severity </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Informational </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Message </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> This computer %1 a primary computer for this user </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Notes and resolution </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. A customer should interrogate the rest of the IT staff on the network if not expecting to see these events </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Event ID </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> 64 </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Severity </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Informational </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Message </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The primary computer relationship for this computer and this user was not evaluated due to %1 </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Notes and resolution </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Examine the extended error for details. 
</P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <P> To see that the <STRONG> Redirect folders on primary computers only </STRONG> policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state "This computer is not a primary computer for this user" - or that it is (good catch, Johan from Comments). </P> <BR /> <P> <IMG src="" /> </P> <BR /> <H2> Architecture </H2> <BR /> <P> Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema. </P> <BR /> <P> Primary Computer is a client-side feature; no matter what you configure in Active Directory or group policy on domain controllers, Windows 7, Windows Server 2008 R2, and older members of the Windows family will not obey the settings. </P> <BR /> <H3> AD DS Schema </H3> <BR /> <DIV> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Attribute </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Explanation </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> msDS-PrimaryComputer </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The primary computers assigned to a user or a security group containing users. Contains multi-valued, linked-value distinguished names that reference the msDS-isPrimaryComputerFor backlink on a computer object </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> msDS-isPrimaryComputerFor </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> The users assigned to a computer account. 
Contains multi-valued, linked-value distinguished names that reference the msDS-PrimaryComputer forward link on a user object </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <H3> Processing </H3> <BR /> <P> The processing of this new functionality is: </P> <BR /> <OL> <BR /> <LI> Look at the Group Policy setting to determine if the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection. </LI> <BR /> <LI> If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller </LI> <BR /> <LI> Check for the required schema version </LI> <BR /> <LI> Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer </LI> <BR /> <LI> Check to see if the current user is in the list returned by this attribute, or in a group returned by this attribute, and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser </LI> <BR /> <LI> <BR /> <DIV> If step 5 is FALSE: </DIV> <BR /> <OL> <BR /> <LI> For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created </LI> <BR /> <LI> For FR, if Folder Redirection previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). 
If there is no current FR configuration, there is no work to be done </LI> <BR /> </OL> </LI> <BR /> </OL> <BR /> <H2> Troubleshooting </H2> <BR /> <P> Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected: </P> <BR /> <OL> <BR /> <LI> User assigned the correct computer distinguished name (or in the security group assigned the computer DN) </LI> <BR /> <LI> AD DS replication has converged for the user and computer objects </LI> <BR /> <LI> AD DS and SYSVOL replication has converged for the Primary Computer group policies </LI> <BR /> <LI> Primary Computer group policies applying to the computer </LI> <BR /> <LI> User has logged off and on since the Primary Computer policies applied </LI> <BR /> </OL> <BR /> <P> The logs of note for troubleshooting Primary Computer are: </P> <BR /> <DIV> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <BR /> <TABLE> <TBODY><TR> <TD> <BR /> <P> <STRONG> Log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> <STRONG> Notes and Explanation </STRONG> </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Gpresult/GPMC RSoP Report </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that Primary Computer policy is applying to the computer or user </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Group Policy operational Event log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that group policy in general is applying to the computer or user with specific details </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> System Event Log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that group policy in general is applying to the computer or user with generalities </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Application Event log 
</STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that Folder Redirection and Roaming User Profiles are working with generalities and specific details </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Folder Redirection operational event log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that Folder Redirection is working with specific details </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> User Profile Service operational event log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that Roaming User Profile is working with specific details </P> <BR /> </TD> </TR> <TR> <TD> <BR /> <P> <STRONG> Fdeploy.log </STRONG> </P> <BR /> </TD> <TD> <BR /> <P> Validates that Folder Redirection is working with specific details </P> <BR /> </TD> </TR> </TBODY></TABLE> <BR /> </DIV> <BR /> <P> </P> <BR /> <P> Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings! </P> <BR /> <P> Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!). <STRONG> </STRONG> </P> <BR /> <P> Have fun. </P> <BR /> <P> Ned "shouldn't we have called it 'Primary Computers?'" Pyle </P> </BODY></HTML> Fri, 05 Apr 2019 02:42:58 GMT Ryan Ries 2019-04-05T02:42:58Z So long and thanks for all the fish <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Oct 12, 2012 </STRONG> <BR /> <P> My time is up. 
</P> <BR /> <P> It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with <A href="" target="_blank"> Steve Taylor </A> , trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“ <A href="#" target="_blank"> Replication of Absent Linked Object References </A> – what the hell have I gotten myself into?”). </P> <BR /> <P> Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is <A href="" target="_blank"> proof of my existence </A> even so. It’s been the most satisfactory work of my career. </P> <BR /> <P> Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings. </P> <BR /> <P> There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. 
There’s nothing like a work environment where you really like your colleagues. I’ve sat next to <A href="" target="_blank"> Dave Fisher </A> since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are. </P> <BR /> <P> The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in <A href="" target="_blank"> mail sacks </A> and <A href="" target="_blank"> articles </A> (I had to <EM> learn </EM> in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft. </P> <BR /> <P> My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that <A href="" target="_blank"> Jonathan </A> or <A href="" target="_blank"> Mike </A> will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers. </P> <BR /> <P> Thanks for everything, and see you again soon. </P> <BR /> <BLOCKQUOTE> <BR /> <P> <IMG src="" /> <BR /> We are looking forward to Seattle’s famous mud puddles </P> <BR /> </BLOCKQUOTE> <BR /> <P> </P> <BR /> <P> - <A href="#" target="_blank"> Ned “42” Pyle </A> </P> </BODY></HTML> Fri, 05 Apr 2019 02:41:29 GMT Ned Pyle 2019-04-05T02:41:29Z AD FS 2.0 RelayState <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 27, 2012 </STRONG> <BR /> <P> Hi guys, <A href="#" target="_blank"> Joji Oshima </A> here again with some great news! 
<A href="#" target="_blank"> AD FS 2.0 Rollup 2 </A> adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?” </P> <H3> What is RelayState and why should I care? </H3> <P> There are two protocol standards for federation ( <A href="#" target="_blank"> SAML </A> and <A href="#" target="_blank"> WS-Federation </A> ). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server. <BR /> <B> Note: </B> </P> <TABLE> <TBODY><TR> <TD> <P> If the relying party is the application itself, you can use the loginToRp parameter instead. <BR /> Example: <BR /> <A href="#" target="_blank"> </A> </P> </TD> </TR> </TBODY></TABLE> <P> </P> <P> Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped. </P> <H3> When can I use RelayState? </H3> <P> We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation. 
</P> <P> The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0: <BR /> </P> <UL> <LI> Identity provider security token server (STS) -&gt; relying party STS (configured as a SAML-P endpoint) -&gt; SAML relying party App </LI> <LI> Identity provider STS -&gt; relying party STS (configured as a SAML-P endpoint) -&gt; WIF (WS-Fed) relying party App </LI> <LI> Identity provider STS -&gt; SAML relying party App </LI> </UL> <P> The following initiated flow is not supported: <BR /> </P> <UL> <LI> Identity provider STS -&gt; WIF (WS-Fed) relying party App </LI> </UL> <H3> Manually Generating the RelayState URL </H3> <P> There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page. </P> <P> <IMG src="" /> </P> <P> The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of <A href="#" target="_blank"></A> and the RelayState of <A href="#" target="_blank"></A> </P> <P> Starting values: <BR /> RPID: <A href="#" target="_blank"></A> <BR /> RelayState: <A href="#" target="_blank"> </A> </P> <P> </P> <TABLE> <TBODY><TR> <TD> <P> <B> Step 1: </B> The first step is to <A href="#" target="_blank"> URL Encode </A> each value. </P> </TD> </TR> <TR> <TD> <P> RPID: <BR /> RelayState: </P> </TD> </TR> </TBODY></TABLE> <P> </P> <P> <B> </B> </P> <TABLE> <TBODY><TR> <TD> <P> <B> Step 2: </B> The second step is to take these URL Encoded values, merge it with the string below, and URL Encode the string. 
<B> </B> </P> </TD> </TR> <TR> <TD> <P> String: <BR /> <B> RPID= </B> <I> &lt;URL encoded RPID&gt; </I> <B> &amp;RelayState= </B> <I> &lt;URL encoded RelayState&gt; <BR /> </I> </P> <P> String with values: <BR /> RPID= &amp;RelayState= </P> <P> URL Encoded string: <BR /> </P> </TD> </TR> </TBODY></TABLE> <P> </P> <P> <B> </B> </P> <TABLE> <TBODY><TR> <TD> <P> <B> Step 3: </B> The third step is to take the URL Encoded string and add it to the end of the string below. </P> </TD> </TR> <TR> <TD> <P> String: <BR /> ?RelayState= </P> <P> String with value: <BR /> ? </P> </TD> </TR> </TBODY></TABLE> <TABLE> <TBODY><TR> <TD> <P> <B> Step 4: </B> The final step is to take the final string and append it to the IDP initiated sign on URL. </P> </TD> </TR> <TR> <TD> <P> IDP initiated sign on URL: <BR /> <A href="#" target="_blank"></A> </P> <P> Final URL: <BR /> <A href="#" target="_blank"></A> </P> </TD> </TR> </TBODY></TABLE> <P> </P> <P> The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application. </P> <P> <IMG src="" /> </P> <H3> Is there an easier way? </H3> <P> The multi-step process and manual manipulation of the strings are prone to human error which can cause confusion and frustration. Using a simple HTML file, we can fill out the starting information into a form and click the <B> <I> Generate URL </I> </B> button. </P> <P> <IMG src="" /> </P> <P> The code sample for this HTML file has been posted to <A href="#" target="_blank"> CodePlex </A> . </P> <H3> Conclusion and Links </H3> <P> I hope this post has helped demystify RelayState and will have everyone up and running quickly. 
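The four manual steps above boil down to a short double-encoding routine: encode each value, build the RPID/RelayState string, encode that whole string again, and append it to the sign-on URL. Here is a minimal Python sketch of that logic; the host names are placeholders, since the article's sample URLs did not survive the archive:

```python
from urllib.parse import quote

def relaystate_url(signon_url, rpid, relay_state):
    """Build an AD FS 2.0 IDP-initiated sign-on URL carrying RelayState,
    following the four manual steps described above."""
    # Steps 1-2: URL-encode each value, merge into the
    # RPID=...&RelayState=... string, then URL-encode the whole string.
    inner = "RPID={}&RelayState={}".format(
        quote(rpid, safe=""), quote(relay_state, safe=""))
    # Steps 3-4: prepend ?RelayState= and append to the sign-on URL.
    return "{}?RelayState={}".format(signon_url, quote(inner, safe=""))

print(relaystate_url(
    "https://sts.contoso.com/adfs/ls/idpinitiatedsignon.aspx",
    "https://rp.fabrikam.com/adfs/services/trust",
    "https://app.fabrikam.com/reports"))
```

The output should match what the HTML generator produces for the same inputs; note the values end up encoded twice (for example, ":" becomes "%3A" and then "%253A"), which is expected.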
</P> <P> <B> AD FS 2.0 RelayState Generator </B> <BR /> <A href="#" target="_blank"> </A> <BR /> <B> HTML Download </B> <BR /> <A href="#" target="_blank"> </A> </P> <P> <B> AD FS 2.0 Rollup 2 </B> <BR /> <A href="#" target="_blank"> </A> </P> <P> <B> Supporting Identity Provider Initiated RelayState </B> <BR /> <A href="#" target="_blank"> </A> </P> <P> Joji "Halt! Who goes there!" Oshima </P> </BODY></HTML> Fri, 05 Apr 2019 02:41:11 GMT Ryan Ries 2019-04-05T02:41:11Z Windows Server 2012 Shell game <HTML> <HEAD></HEAD><BODY> <STRONG> First published on TechNet on Sep 20, 2012 </STRONG> <BR /> <P> Here's the scenario, you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" <A href="#" target="_blank"> Microsoft TechNet subscription </A> . Using Hyper-V, you create a new virtual machine, mount the ISO and breeze through the setup screen until you are mesmerized by the <A href="#" target="_blank"> Newton's cradle </A> -like experience of the circular progress indicator </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Click…click…click…click-- installation complete; the computer reboots. </P> <BR /> <P> You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then… </P> <BR /> <P> <B> Holy Shell, Batman! I don't have a desktop! </B> </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> Hey everyone, <A href="#" target="_blank"> Mike </A> here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 <A href="#" target="_blank"> general availability on September 4 </A> . 
If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core and in your haste to try out our latest bits, you breezed right past the option to change it. </P> <BR /> <P> This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, core presented quite a challenge for many Windows administrators, as Windows PowerShell and command line utilities were the only methods used to manage the server and its roles locally (you could use most management consoles remotely). </P> <BR /> <P> Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: <B> Minimal Server Interface </B> . Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. 
Minimal Server Interface is a full installation of Windows that excludes: </P> <BR /> <UL> <BR /> <LI> Internet Explorer </LI> <BR /> <LI> The Desktop </LI> <BR /> <LI> Windows Explorer </LI> <BR /> <LI> Windows 8-style application support </LI> <BR /> <LI> Multimedia support </LI> <BR /> <LI> Desktop Experience </LI> <BR /> </UL> <BR /> <P> Minimal Server Interface gives Windows administrators - who are not comfortable using Windows PowerShell as their only option - the benefit of a reduced attack surface and fewer reboot requirements (i.e., on Patch Tuesday), while keeping GUI management available as they ramp up their Windows PowerShell skills. </P> <BR /> <P> <IMG src="" /> </P> <BR /> <P> <I> "Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" </I> If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is <I> "Install the Server Graphical Shell or Install Minimal Server Interface." </I> </P> <BR /> <P> Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but simply are a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task, and then remove it when you have completed that management task (understand, however, that switching the shell option requires you to restart the server). </P> <BR /> <P> Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on server core, or you cannot manage them on server core. 
Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup). </P> <BR /> <H3> Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure </H3> <BR /> <P> Removing the Server Graphical Shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click <B> Manage </B> , and click <B> Remove Roles and Features </B> . Select the target server and then click <B> Features </B> . Expand <B> User Interfaces and Infrastructure </B> . </P> <BR /> <P> To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the <B> Server Graphical Shell </B> checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the <B> Server Graphical Shell </B> and <B> Graphical Management Tools and Infrastructure </B> check boxes and complete the wizard. </P> <BR /> <P> Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: it's wicked fast to install and remove features and roles using Windows PowerShell, and you need to learn it in order to add the Server Graphical Shell on a Server Core or Minimal Server Interface installation. </P> <BR /> <P> Use the following command to view a list of the Server GUI components: </P> <BR /> <LI-CODE lang="powershell">Get-WindowsFeature server-gui*</LI-CODE> <BR /> <P> Give your attention to the <B> Name </B> column. You use this value with the <B> Remove-WindowsFeature </B> and <B> Install-WindowsFeature </B> PowerShell cmdlets.
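</P> <BR /> <P> On a full GUI installation, the output of that command looks roughly like the following. (This is an illustrative sketch rather than captured output; the exact display names can vary by build.) </P> <BR /> <LI-CODE lang="text">Display Name                                        Name                    Install State
------------                                        ----                    -------------
[X] Graphical Management Tools and Infrastructure   Server-Gui-Mgmt-Infra       Installed
[X] Server Graphical Shell                          Server-Gui-Shell            Installed</LI-CODE> <BR /> <P>&nbsp;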
</P> <BR /> <P> To remove the Server Graphical Shell, which reduces the GUI server installation to a Minimal Server Interface installation, run: </P> <BR /> <LI-CODE lang="powershell">Remove-WindowsFeature Server-Gui-Shell</LI-CODE> <BR /> <P> To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run: </P> <BR /> <LI-CODE lang="powershell">Remove-WindowsFeature Server-Gui-Mgmt-Infra</LI-CODE> <BR /> <P> To remove both the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run: </P> <BR /> <LI-CODE lang="powershell">Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra</LI-CODE> <BR /> <H3> Adding Server Graphical Shell and Graphical Management Tools and Infrastructure </H3> <BR /> <P> Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine whether the binaries are present by using the <B> Get-WindowsFeature </B> Windows PowerShell cmdlet and viewing the <B> Install State </B> column. The <B> Removed </B> value indicates that the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The <B> Install-WindowsFeature </B> cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature. </P> <BR /> <P> <B> To stage Server Shell features to a Server Core installation </B> </P> <BR /> <P> You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation.
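</P> <BR /> <P> Before staging anything, you can see what is on the media. A DISM query along these lines lists the images and their indexes in the media's INSTALL.WIM (this sketch assumes the media is mounted as drive D:): </P> <BR /> <LI-CODE lang="text">dism /Get-WimInfo /WimFile:D:\sources\install.wim</LI-CODE> <BR /> <P> Each image in the output appears with an <B> Index </B> and a <B> Name </B> ; note the index of the image you need, because you will supply it to <B> Install-WindowsFeature </B> shortly. </P> <BR /> <P>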
Windows installation files are stored in WIM files that are located in the <B> \sources </B> folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM. </P> <BR /> <P> You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4, because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail. </P> <BR /> <P> You still use the <B> Install-WindowsFeature </B> cmdlet to stage the binaries to the computer; however, we are going to use the <B> -source </B> argument to inform <B> Install-WindowsFeature </B> of the image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like this: </P> <BR /> <LI-CODE lang="powershell">Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4</LI-CODE> <BR /> <P> Pay particular attention to the path supplied to the <B> -source </B> argument. You need to prefix the path to your installation media's install.wim file with the keyword <B> wim: </B> and suffix the path with <B> :4 </B> , which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one: it proceeds up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working).
Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all of the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this using Windows PowerShell by running the <B> Restart-Computer </B> cmdlet. </P> <BR /> <P> Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware that the GUI is available. The server should reboot and inform you that it is configuring Windows features, and it is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then restart. </P> <BR /> <P> It should return to the <B> Configuring Windows features </B> screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then should show you the <B> Press Ctrl+Alt+Delete to sign in </B> screen. </P> <BR /> <H3> Done </H3> <BR /> <P> That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that <I> deer-in-the-headlights </I> look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time. </P> <BR /> <P> Mike </P> <BR /> <P> <A href="#" target="_blank"> "Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished.
However, this valorous visitation of a bygone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin vanguarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet yo