System Center

Enable Credential Guard in ConfigMgr

While working with a customer recently it was decided they wanted Credential Guard, which in itself is a good thing. The problem was that they wanted it enabled as part of the Configuration Manager OSD.

Now, normally automating things during ConfigMgr OSD isn't too difficult, but ConfigMgr has a problem with anything that requires a double reboot. Since Hyper-V is a prerequisite for Credential Guard and Hyper-V requires a double reboot, this poses a problem.

This might be solved by Microsoft in the future, but for now you will have to employ a bit of a workaround. It consists of a couple of things: setting up a reboot that is not monitored by the task sequence, installing the required roles, and finally setting the relevant registry values to enable the features.

Step 1 – Adding a reboot outside of the task sequence

This is something you should probably do anyway, and it has been documented in several blog posts before this one.

You will need to set a custom task sequence variable called SMSTSPostAction to "Shutdown /r /t 30". This will cause a reboot 30 seconds after the task sequence thinks it is done.


Step 2 – Creating the package

Download the script from here and put it in a folder on your CMSources share. Create a new package with a program, and define the following command line for running it: PowerShell.exe -ExecutionPolicy ByPass -File "Enabled-CredentialGuard.ps1"

Don't forget to enable "Allow this program to be installed from the Install Package task sequence without being deployed".

Step 3 – Customize the task sequence

Lastly, we customize the sequence to run this package at a specific point. The rule here is that it needs to run after any other steps that can cause a reboot, since the script installs and configures everything, but the reboot should happen outside the sequence, as configured in step 1.

For this customer that happens just before the status is set to 5, as you can see in the picture below.


The last customization is to add a condition on this step that checks a task sequence variable: isUEFI equals true. This makes the step apply only to UEFI-based machines, as Credential Guard will not work on legacy BIOS. If you want to, you can add checks for Secure Boot or other prerequisites.


The script – raw

Created:     2016-04-02
Version:     1.0
Author :     Peter Lofgren
Twitter:     @LofgrenPeter
Blog   :

This script is provided "AS IS" with no warranties, confers no rights and
is not supported by the author

Function Import-SMSTSENV{
    try {
        $tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
        Write-Output "$ScriptName - tsenv is $tsenv "
        $MDTIntegration = "YES"
        #$tsenv.GetVariables() | % { Write-Output "$ScriptName - $_ = $($tsenv.Value($_))" }
    }
    catch {
        Write-Output "$ScriptName - Unable to load Microsoft.SMS.TSEnvironment"
        Write-Output "$ScriptName - Running in standalonemode"
        $MDTIntegration = "NO"
    }
    finally {
        if ($MDTIntegration -eq "YES") {
            if ($tsenv.Value("LogPath") -ne "") {
                $Logpath = $tsenv.Value("LogPath")
                $LogFile = $Logpath + "\" + "$LogName.log"
            } elseif ($tsenv.Value("_SMSTSLogPath") -ne "") {
                $Logpath = $tsenv.Value("_SMSTSLogPath")
                $LogFile = $Logpath + "\" + "$LogName.log"
            }
        } else {
            $Logpath = $env:TEMP
            $LogFile = $Logpath + "\" + "$LogName.log"
        }
    }
}
Function Start-Logging{
    Start-Transcript -Path $LogFile -Force
}
Function Stop-Logging{
    Stop-Transcript
}


# Set Vars

$SCRIPTDIR = Split-Path -Parent $MyInvocation.MyCommand.Path
$SCRIPTNAME = Split-Path -Leaf $MyInvocation.MyCommand.Path
$LogName = $SCRIPTNAME
$SettingsFile = $SCRIPTDIR + "\" + $SettingsName
$LANG = (Get-Culture).Name
$OSV = $Null


#Try to Import SMSTSEnv
. Import-SMSTSENV


#Start Transcript Logging
. Start-Logging


#Output base info
Write-Output ""
Write-Output "$ScriptName - ScriptDir: $ScriptDir"
Write-Output "$ScriptName - SourceRoot: $SOURCEROOT"
Write-Output "$ScriptName - ScriptName: $ScriptName"
Write-Output "$ScriptName - SettingsFile: $SettingsFile"
Write-Output "$ScriptName - Current Culture: $LANG"
Write-Output "$ScriptName - Integration with MDT(LTI/ZTI): $MDTIntegration"
Write-Output "$ScriptName - Log: $LogFile"


#Enable Hyper-V
If ([environment]::Is64BitOperatingSystem -eq $True) {
  $InstallerName = "C:\Windows\sysnative\dism.exe"
}
Else {
  $InstallerName = "C:\Windows\system32\dism.exe"
}
$Arg = "/online /enable-feature /featurename:Microsoft-Hyper-V-Hypervisor /all /LimitAccess /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished installing Hyper-V-Hypervisor with exitcode $($Result.ExitCode)"

$Arg = "/online /enable-feature /featurename:IsolatedUserMode /LimitAccess /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished installing IsolatedUserMode with exitcode $($Result.ExitCode)"

$Arg = "/online /disable-feature /featurename:Microsoft-Hyper-V-Tools-All /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished removing Hyper-V Tools with exitcode $($Result.ExitCode)"

#Enable Credential Guard
$Path = "HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard"
New-Item -Path $Path -ItemType Directory -Force -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name EnableVirtualizationBasedSecurity -PropertyType 4 -Value 1 -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name RequirePlatformSecurityFeatures -PropertyType 4 -Value 1 -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name HypervisorEnforcedCodeIntegrity -PropertyType 4 -Value 0 -ErrorAction SilentlyContinue

New-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control\Lsa -Name LsaCfgFlags -PropertyType 4 -Value 1 -ErrorAction SilentlyContinue


#Stop Transcript Logging
. Stop-Logging
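After the SMSTSPostAction reboot you can verify the result. A minimal check sketch, assuming a Windows 10 / Server 2016 machine where the Device Guard WMI class is available:

```powershell
# Query the Device Guard WMI class to see the configured and running state.
# SecurityServicesRunning contains 1 when Credential Guard is running.
Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning
```

You can also check msinfo32.exe, which lists the same Device Guard state under System Summary.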


MDT handoff to SCCM

I will start by saying this is not in any of the best practices books but it works well and is used for certain scenarios.

Sometimes when I get to a customer they have MDT set up and working for OSD, but someone higher up has decided that they need ConfigMgr to manage clients going forward. Don't get me wrong, I'm all for using ConfigMgr to manage clients, but that being said, not everyone finds ConfigMgr the easiest or most understandable platform to use. So the question arises: "Could we still use MDT to deploy the machines and then ConfigMgr to manage them?" And of course the answer is YES!

So how do we accomplish this? There are two ways and I will describe both but only show one.

The first way is to use the excellent startup script created by Jason Sandys (found here). It is easy to set up and only requires a small startup GPO and a file share. The upside is that if a machine targeted by the GPO didn't get the agent during initial setup, or someone uninstalled it, the client will get reinstalled; Jason has also managed to add some repair functions to it. The downside is that the client actually has to read the GPO, and for that to work it has to be a member of the domain, so workgroup computers are out.

The second way is what we are going to focus on for the rest of this post: installing it during OSD in MDT as an application. The upside of doing it this way is that as soon as the deployment is done the client is installed, regardless of whether the machine joins a domain or not. Another upside, compared to a GPO, is that the GPO-based install could kick in if the machine restarts at any point during deployment, possibly interfering while you are running other installations or configuration steps.

So how do I do this? First we create an application in MDT, then we link that application into our sequence.

Step 1 – Creating the application

Create a folder named "CMAgent" so we have something to work with. Inside that, create another folder called "Source". Next to the Source folder, place the script file and the XML which you can download a bit further down in the post. Into the Source folder, copy the client installation files from your site server in \\<your site server>\sms_<sitecode>\Client.

You should then have a folder that looks like this


Now we import that into MDT: give the application a name, point to your source folder and set a command line. For the name I prefer Install – CCM Agent so I can easily see what the application does just by looking at the name. For the command line, use the following:

PowerShell.exe -ExecutionPolicy ByPass -File Install-Application.ps1

If you open the application when it's done, it should look like this


Step 2 – Adding it to the Task Sequence

The next step is to add it to the sequence in the correct spot, to avoid it being installed too early and interfering with your deployment. Open your sequence, go all the way down to the end and mark the step called Apply Local GPO Package. Click Add at the top and create a group. Name the group so you know what it does, either Custom Steps or, as in this case, Custom Handoff. In that group, add an Install Application step, change it to install a single application, and point it to your newly imported application.

The sequence should then look something like this


Step 3 – Customizing the agent installation

The last thing you need to do is change some settings to point the agent to your specific environment. Open up your deployment share folder and browse to Applications\Install – CCM Agent. Use Notepad to edit the settings.xml file and change the InstallSwitch section. Below is a sample of how it can look; make sure to change it to suit your server name and infrastructure.
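As a hedged illustration, the InstallSwitch value holds the ccmsetup.exe properties; the surrounding element names are assumed from the script's settings.xml, and the site code and server names below are placeholders for your environment:

```xml
<Settings>
  <!-- ccmsetup.exe properties; replace site code and FQDNs with your own -->
  <InstallSwitch>SMSSITECODE=PS1 SMSMP=cm01.corp.contoso.com FSP=cm01.corp.contoso.com</InstallSwitch>
</Settings>
```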


You're all set! Next time you image a computer it will have the CCM agent installed.

Link to download the script is here

Happy deploying!


MDT Database – The PowerShell module fix

A long time ago Michael Niehaus wrote a brilliant PowerShell module for manipulating the MDT database. It works great for bulk importing computers, creating and assigning roles, and so forth. You can read more and download the module from his blog here.

The reason behind this blog post is that there is an issue with the module, or rather with a view in the database used by the module. The effect is that when searching for computers you cannot use the description field to find them.

So if we take a look at my database, I have two entries, both with a description.


But when I have imported the module, connected to the database and run Get-MDTComputer -Description "LT-PETER", I get an error.


So Mikael Nyström and I did some digging and found that there is a mismatch between the query and the view being used.

The Fix

There are two ways of fixing this. You can either do it manually or use the script I have included here.

The manual way: open up SQL Management Studio and browse to your database. Open the view called dbo.ComputerSettings, choose Design and check the box for the Description column (from the ci table). Save and you're done.


The script way: download the script here and run it using PowerShell. The only things you need to enter are the name of your SQL Server, including instance name, and the name of your MDT database.


The script can be run with parameters on one line, or just run the script and it will ask for server and database name.


Now when you run the command it can find the computer!


Download Script


MDT Database – the why and the how

The Why

When I discuss the MDT database with customers, the most common response is "well, we don't need it, we can fill in the form for each computer, that is just faster".

For me using the database is a given. It gives me flexibility and the result will always be the same regardless of who images a computer.

Imagine the following scenario: the user Bob's computer has an error. To solve the problem he calls the service desk and they create a ticket. Now he has to get the computer to the service desk, or wait for an onsite technician to come help him reimage his computer so it gets the correct name, apps and so on.

Would it not be better to have all these things predefined, so Bob himself can reimage the computer and be back to work quicker? Most would agree. Then we have two main options.

  1. Generate data

This is a good option, but it only works for generic data or for information that does not have to be specific.

  2. The database

This gives the option to preset information and, in an easy way, create roles for different types of information.

If we look at the options above, the usual setting that fits in category 1 is something like the computer name, which can be generated based on, for example, the computer's serial number. Settings that normally fit in category 2 are applications or user-specific settings, e.g. this user should have these applications.
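For category 1, a hedged CustomSettings.ini sketch: MDT evaluates expressions between # characters during gathering, so a name can be derived from the serial number (the prefix and length below are made-up examples):

```ini
[Default]
; Hypothetical rule: "PC-" plus the last seven characters of the serial number
OSDComputerName=PC-#Right("%SerialNumber%",7)#
```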

So if you find this interesting, let’s move on to how to set this up.

The How

Setting up the database for use with either LTI (Lite Touch) or ZTI (Zero Touch) is easy and requires no additional licenses or products (well, almost: if you run LTI you will need SQL Express).

First up, the SQL Server; you will need one. I would not recommend putting this on your SQL Server cluster, as you need to enable the Named Pipes protocol. If you run LTI, install SQL Express on your MDT server; if you are running ZTI, you should already have SQL on your primary site server, so use that one.

The database will have an initial size of 4 MB, and after using it for a while and entering a couple of computers it might even grow to 10 MB. So this will not be the database that takes all the memory or space from your server.

The database is created from inside the Deployment Workbench, and once created it is also supported to extend and modify it.

Step 1 – Create the database

Open up the Deployment Workbench and, in your deployment share, go to Advanced Configuration. Right-click Database and select New Database.

Next, follow the guide to create a new database and give it a name.

Step 2 – Adding a computer

In this guide we will cover how to create a computer with the GUI. However, Michael Niehaus has created an MDTDB PowerShell module so you can do batch imports and other modifications to the database with PowerShell; you can read his blog post about it here:
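As a sketch of the scripted route, assuming the MDTDB module file is in the current folder and using placeholder server, database and MAC address values:

```powershell
# Load Michael Niehaus's MDTDB module and connect to the database
Import-Module .\MDTDB.psm1
Connect-MDTDatabase -sqlServer MDT01 -database MDTDB

# Create a computer identified by MAC address, with a description and basic settings
New-MDTComputer -macAddress "00:15:5D:00:11:22" -description "LT-PETER" `
    -settings @{ OSInstall = 'YES'; OSDComputerName = 'LT-PETER' }
```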

To create a computer, select the Computers node under the database, right-click the node and select New. It asks for a way of identifying the computer and you have four options: asset tag, serial number, MAC address or UUID. You ONLY need to enter one! You can also fill in a description; I usually fill in the computer name. The only reason I do this is that in the GUI the description is shown in the list of computers and gives me an easy way of identifying the computer.


Step 3 – Adding settings to the computer

Next we need to add some settings to the computer. You can view this as filling in the wizard without being there. Under the Details tab you can fill in information for computer name, network adapter settings, domain join, etc. This is pretty much all the settings that can be defined in the wizard, and some extras as well.

Step 4 – Adding applications to the computer

You can also specify applications that should belong to that computer; these can be either ConfigMgr 2012 applications or LTI applications. You can also add ConfigMgr packages if that is what you use.

Step 5 – Adding roles

I will not cover how to create roles, since I have already done a post about that. You can find it here:

This post also covers how to create the database and link the settings into ConfigMgr.

Step 6 – Adding administrators

You can also add local administrators, or domain groups/users that will become local administrators of the computer.

Step 7 – Configure rules

The last step to get this working is configuring the rules. This ensures that as you deploy the computer, it will query the database and get the relevant settings, applications, roles and administrators.

Under the Advanced Configuration section, right-click the Database node and select "Configure database rules". You will get a short wizard asking what you want to query for. Since this is a basic setup you can query for everything without any issues, so leave everything selected and go through the wizard until it's done.

The wizard will add a number of lines to CustomSettings.ini, and if you are using LTI you are now ready to use the database. If you are using ZTI you need to copy the new information into the CustomSettings.ini in the settings package you have.
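The wizard output looks roughly like this, a trimmed sketch showing only the computer-settings query; server, database and share names are placeholders:

```ini
[Settings]
Priority=CSettings, Default

[CSettings]
SQLServer=MDT01
Database=MDTDB
Netlib=DBNMPNTW
SQLShare=DeploymentShare$
Table=ComputerSettings
Parameters=UUID, AssetTag, SerialNumber, MacAddress
ParameterCondition=OR
```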


That’s it! You have successfully configured the database for use with either ZTI or LTI.

In the coming post I will cover extending the database with custom options so stay tuned.



Merge WIM into one – the space saver

I have gotten this question a couple of times: "Can I have two operating systems to choose from in one task sequence?" The correct answer is yes, but it takes up a lot of unnecessary space, and if you are using ConfigMgr and need to download two WIMs instead of one, that adds a lot of time.

What I would instead recommend is merging the two WIM files into one. This saves a lot of space and still gives you the option to use different reference images in the same task sequence.

So how is this done?

First off, you need to create two reference images. The most common scenario is that you have one with Office preinstalled and one without. If we look at how that looks, you will get something like this:


In this case I am using Windows 10 reference images, but this works just as well with Windows Vista, 7, 8 and 8.1 (all WIM-based OSes).

As you can see, they are around 4-5 GB in size. The next step is to merge them. To help with this I have a small script that you can use.

What the script does: it mounts one WIM, appends the mounted files into the other WIM so you get two indexes, cleans up the mount directory, and finally displays the different indexes in the merged WIM file.
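The same steps can be sketched with dism.exe; the paths and image names below are placeholders, and the downloadable script handles all of this for you:

```powershell
# Mount the WIM that should be merged in (read-only is enough for a capture)
$Mount = "C:\Mount"
New-Item -Path $Mount -ItemType Directory -Force | Out-Null
dism.exe /Mount-Wim /WimFile:C:\Setup\REF-W10-Office.wim /Index:1 /MountDir:$Mount /ReadOnly

# Append the mounted files as a second index in the target WIM
dism.exe /Append-Image /ImageFile:C:\Setup\REF-W10.wim /CaptureDir:$Mount /Name:"Windows 10 with Office"

# Clean up and show the indexes in the merged file
dism.exe /Unmount-Wim /MountDir:$Mount /Discard
dism.exe /Get-WimInfo /WimFile:C:\Setup\REF-W10.wim
```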

You can download the script here:

When that is complete you get something looking like this.


As you can see, the image is now a bit bigger, but it has not doubled in size. This is because when the WIMs are merged, duplicate files are stored only once, keeping the file size down.

This is the same method Microsoft has used in the past when creating Windows Server media containing Core and GUI versions on the same media.

The next step is to import this into whatever solution you are using (MDT/SCCM).

In this instance I have used MDT; it looks similar in SCCM, but there are a couple of differences. If you are unsure, drop me an email or PM and I can help you out.

So, import an operating system, choose custom image and point to the WIM created earlier. When it's done it looks something like this


If we look at the properties for these two operating systems, you can see that they both use the same file in the background, but different indexes.


Now you can add another Install Operating System step and select different criteria to run the different steps: for instance, different blocks in CustomSettings.ini, a setting in the MDT database (existing or one you add), or web services, so that if the computer is in a certain OU or AD group it gets Office and otherwise it doesn't. The possibilities for creating rules are, as always, limitless.

Happy deploying!


System Center Configuration Manager 2012 R2 SP1 CU2

Here we are, it's time for another upgrade: another update with some bugfixes and goodies inside.

This time it's called Cumulative Update 2 for System Center Configuration Manager 2012 SP2 and 2012 R2 SP1. This CU also includes a number of hotfixes that have been released since CU1, among others the important one for drivers, where driver package sizes got a bit bloated.

Installing this is pretty straightforward. Start by making sure you have a backup of the system! This should always be done, but since this includes a DB update as well, it's extra important. Next up is a reboot for good measure. You can always check whether there are any pending reboots, but I prefer to just do a clean reboot to make sure.

If you want to make use of snapshots/checkpoints, make sure the VM is turned off when you take the snapshot/checkpoint. This is to ensure data consistency, since there is a SQL database in the background.

When this is done you apply the patch and wait for it to complete. Options for automatic upgrade and client package creation will be included in the wizard.

When you are done, make sure to do a reboot, as the CU requires it!

After the reboot you can start upgrading your clients. When they are done, it should look something like this.


Note that the new version number is 5.00.8239.1301.
Feel free to create a collection to include these, so you also get collections for the previous versions.
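For reference, a hedged WQL sketch of a membership query for a collection holding clients on this exact version:

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
where SMS_R_System.ClientVersion = "5.00.8239.1301"
```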

To create collections for most of the agent versions you can use a PowerShell script I created. It can be downloaded here.

The Cumulative Update can be found here

Good luck and make sure to have a backup!

Techdays Sweden Day 2

Techdays has ended its second day, and with that it's over for this year. It has been a good year with lots of fun sessions, new insights, new connections and, last but not least, lots and lots of cool demos!


Well, this will be a bit of an anticlimax, as I missed the keynote on the second day, but from what I have been told it was a good one. My concern with this second-hand report is of course that it was Scott Hanselman who did the keynote. While Scott is an expert speaker and fun to listen to, he is also a developer, and as an IT pro, JavaScript and the likes don't really give me anything. For me the keynote should be about new technologies and how the roadmap looks ahead, or if that cannot be shared for some reason, I would like the keynote to be an inspirational talk about something related to IT. As an example, a couple of years ago at a previous Techdays the second-day keynote was held by an inspirational speaker, and it had nothing to do with IT but at the same time everything to do with it, since the speech was about "Getting things done", which most of us can relate to.

Enough rant!


Day 2 sessions for me were about the datacenter, the client, Windows 10, Windows Server 2016 and my favorite, AzurePack vs AzureStack (more on this later).

Mikael Nyström (@Mikael_nystrom) and Markus Lassfolk (@Lassfolk) did a session about the modern datacenter and how to get there with no investment at all. One thing is that this gives you the option to build your private cloud the fabric way with what you already have. The other is that they showed how to get a good grip on your environment, how much old stuff is running, and how you can consolidate it. To get this overview there is only one tool to use, the Microsoft Assessment and Planning Toolkit, and of course they showed how to use it!

At the same time, Marcus Murray (@MarcusSwede) and Hasain Alshakarti (@Alshakarti) showed how to hackproof your clients in a day. I have seen older versions and variations of this session before, and it is well worth your time! Check it out when/if they release the sessions online!

Next up was "The Force Awakens" with Johan Arwidmark (@Jarwidmark) and Mikael Nyström (@Mikael_nystrom), showing how Windows 10 and Windows Server 2016 are better together! This session was full. Well, full doesn't really cover it: the seats were filled, people were standing along all three walls to see the session, and every open spot on the floor had people sitting in it. If the room was sized for 100 people, my guess is that there were 200 there. When this session shows up, if you didn't catch it, go see it! In their fun and relaxed way they showed news in both the client and the server, and how to deploy them both.

Ending the day, and the highlight of Techdays for me, was Mikael Nyström (@Mikael_Nystrom) and Markus Lassfolk (@Lassfolk) showing how you should be doing self-service, and why, in the session AzurePack vs. AzureStack. The session covered the differences between AP and AS and how the roadmap for each looks. The demos were just amazing: building a new website in seconds, PowerShell against AzurePack the same way you would do it against Azure, and running backup and restore of a server within the AzurePack portal from the correct DPM server. This is a must-see, and if you haven't set up self-service yet, now is a good time! If you don't know how, let me know and I will put you in contact with both Markus and Mikael so they can help you out!


Hopefully you had a nice experience at Techdays, learned a lot and got some ideas on what to do going forward; I know I have. Hope to see you next year, and if you have any questions, give a shout below, on Twitter or Facebook!

The future is now decoded! 😉