
ConfigMgr–Disk Space Compliance

One of the least utilized features in ConfigMgr is compliance items and baselines. For some reason most of my customers tend to forget that a small amount of monitoring on the client side goes a long way towards reducing the number of tickets to your helpdesk.

One of the things you might wish to measure is the free space left on the OS drive. This is easily done with a small compliance item. This post will show you how, and you can then expand it to do self-cleaning and other things if you wish.

Start with creating a Compliance Item by going to the Assets and Compliance workspace, then Compliance Settings and Configuration Items. Right-click, select Create Configuration Item, and give it a suitable name. Click Next when ready.


Select the operating systems that this can run on. Make sure to deselect the older OSes that do not support PowerShell, and click Next when done.


In the Settings pane, click New to create a new setting to monitor. Give it a name (I use FreeSpace), then set Setting type to Script and Data type to Integer.


Click Add Script and add the script to get the free space percentage of the C: drive. Click OK and Next to get to the Compliance Rules pane.


The Script

# Free space on C: as a fraction of total size, rounded to a whole percentage
$Volume = Get-Volume -DriveLetter C
$FreeSpace = $Volume.SizeRemaining / $Volume.Size
[int]$Size = [math]::Round($FreeSpace, 2) * 100
return $Size

Click New to add a new rule, give the rule a name and select the setting you just created. Set the rule type to Value and use the following values:
The value returned by the script: Greater than or equal to
The following values: <percent you wish to monitor> (I use 20)
Noncompliance severity for reports: Warning


Now the Configuration Item is done, just click next twice to save everything and create the CI.

For this to actually work, a baseline needs to be created. So head over to the Assets and Compliance workspace, the Compliance Settings node, and find Configuration Baselines. Right-click and create a new baseline.

Give the baseline a name, click Add and select Configuration Item.


You get a list of all your CIs; select the one you just created and click Add, then OK.


Now you have a baseline you can deploy to a collection.

This can of course be expanded with things like noncompliant collections, reports, remediation scripts and so on. You can also add other checks and verifications to the same baseline and monitor things like BitLocker encryption status.
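As a starting point for the self-cleaning idea mentioned above, here is a minimal remediation sketch. The 20 percent threshold and the temp folders are my own assumptions; adapt them before using anything like this:

```powershell
# Hypothetical remediation sketch: clear common temp locations
# when free space on C: drops below the monitored threshold.
$Volume = Get-Volume -DriveLetter C
$FreePercent = [math]::Round($Volume.SizeRemaining / $Volume.Size, 2) * 100

if ($FreePercent -lt 20) {
    # Remove temp files older than 7 days; files in use are silently skipped
    Get-ChildItem -Path "$env:windir\Temp", "$env:TEMP" -Recurse -Force -ErrorAction SilentlyContinue |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
        Remove-Item -Force -Recurse -ErrorAction SilentlyContinue
}
```

A script like this would go in the Remediation script field of the same compliance setting, so it only runs when the client is noncompliant.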

Enable Credential Guard in ConfigMgr

While working with a customer last week, it was decided they wanted Credential Guard, which in itself is a good thing. The problem was that they wanted it enabled as part of the Configuration Manager OSD.

Now, normally automating things during ConfigMgr OSD isn't too difficult; however, ConfigMgr has a problem with things that require double reboots. Since Hyper-V is a prerequisite for Credential Guard, and Hyper-V requires a double reboot, this poses a problem.

This might be solved by Microsoft in the future, but for now you will have to employ a bit of a workaround. It consists of three parts: setting up a reboot that is not monitored by the task sequence, installing the required roles, and entering the relevant registry values to enable the features.

Step 1 – Adding a reboot outside of the task sequence

This is something you should probably do anyway, and it has been documented in several blog posts before this one.

You will need to set a custom task sequence variable called SMSTSPostAction to "Shutdown /r /t 30". This will cause a reboot 30 seconds after the task sequence thinks it is done.
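If you prefer to set the variable from a Run PowerShell Script step rather than a Set Task Sequence Variable step, a small sketch could look like this (it only works inside a running task sequence, where the COM object is available):

```powershell
# Sketch: set SMSTSPostAction from inside a running task sequence
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$tsenv.Value("SMSTSPostAction") = "Shutdown /r /t 30"
```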


Step 2 – Creating the package

Download the script from here and put it in a folder on your CM sources share. Create a new package with a program, and define the following command line for running it: PowerShell.exe -ExecutionPolicy Bypass -File Enabled-CredentialGuard.ps1

Don't forget to enable "Allow this program to be installed from the Install Package task sequence without being deployed".

Step 3 – Customize the task sequence

Lastly, we customize the sequence to run this package at a specific point in the sequence. The rule here is that it needs to run after any other steps that can cause a reboot, as the script installs and configures everything, but the reboot should happen outside of the sequence, as configured in step 1.

For this customer that happens just before the status is set to 5, as you can see in the picture below.


The last customization is to add a condition on this step that checks a task sequence variable: isUEFI equals true. This makes the step apply only to UEFI-based machines, as Credential Guard will not work on legacy BIOS. If you want, you can add checks for Secure Boot or other prerequisites as well.


The script – raw

Created:     2016-04-02
Version:     1.0
Author :     Peter Lofgren
Twitter:     @LofgrenPeter
Blog   :

This script is provided "AS IS" with no warranties, confers no rights and
is not supported by the author

Function Import-SMSTSENV {
    try {
        $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
        Write-Output "$ScriptName - tsenv is $tsenv "
        $MDTIntegration = "YES"
        #$tsenv.GetVariables() | % { Write-Output "$ScriptName - $_ = $($tsenv.Value($_))" }
    }
    catch {
        Write-Output "$ScriptName - Unable to load Microsoft.SMS.TSEnvironment"
        Write-Output "$ScriptName - Running in standalone mode"
        $MDTIntegration = "NO"
    }
    finally {
        if ($MDTIntegration -eq "YES") {
            if ($tsenv.Value("LogPath") -ne "") {
                $Logpath = $tsenv.Value("LogPath")
                $LogFile = $Logpath + "\" + "$LogName.log"
            }
            elseif ($tsenv.Value("_SMSTSLogPath") -ne "") {
                $Logpath = $tsenv.Value("_SMSTSLogPath")
                $LogFile = $Logpath + "\" + "$LogName.log"
            }
        }
        else {
            $Logpath = $env:TEMP
            $LogFile = $Logpath + "\" + "$LogName.log"
        }
    }
}
Function Start-Logging {
    Start-Transcript -Path $LogFile -Force
}
Function Stop-Logging {
    Stop-Transcript
}


# Set Vars

$SCRIPTDIR = Split-Path -Parent $MyInvocation.MyCommand.Path
$SCRIPTNAME = Split-Path -Leaf $MyInvocation.MyCommand.Path
# $LogName is used by Import-SMSTSENV to build the log file name
$LogName = $SCRIPTNAME.Replace(".ps1","")
# Note: $SettingsName is part of the author's script template and is not set in this script
$SettingsFile = $SCRIPTDIR + "\" + $SettingsName
$LANG = (Get-Culture).Name
$OSV = $Null


#Try to Import SMSTSEnv
. Import-SMSTSENV


#Start Transcript Logging
. Start-Logging


#Output base info
Write-Output ""
Write-Output "$ScriptName - ScriptDir: $ScriptDir"
Write-Output "$ScriptName - SourceRoot: $SOURCEROOT"
Write-Output "$ScriptName - ScriptName: $ScriptName"
Write-Output "$ScriptName - SettingsFile: $SettingsFile"
Write-Output "$ScriptName - Current Culture: $LANG"
Write-Output "$ScriptName - Integration with MDT(LTI/ZTI): $MDTIntegration"
Write-Output "$ScriptName - Log: $LogFile"


#Enable Hyper-V
If ([Environment]::Is64BitOperatingSystem -eq $True) {
  $InstallerName = "C:\Windows\sysnative\dism.exe"
}
Else {
  $InstallerName = "C:\Windows\system32\dism.exe"
}
$Arg = "/online /enable-feature /featurename:Microsoft-Hyper-V-Hypervisor /all /LimitAccess /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished installing Hyper-V-Hypervisor with exitcode $($Result.ExitCode)"

$Arg = "/online /enable-feature /featurename:IsolatedUserMode /LimitAccess /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished installing IsolatedUserMode with exitcode $($Result.ExitCode)"

$Arg = "/online /disable-feature /featurename:Microsoft-Hyper-V-Tools-All /Norestart"
Write-Output "About to run $InstallerName with arguments $Arg"
$Result = Start-Process -FilePath $InstallerName -ArgumentList $Arg -NoNewWindow -Wait -PassThru
Write-Output "Finished removing Hyper-V Tools with exitcode $($Result.ExitCode)"

#Enable Credential Guard
$Path = "HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard"
New-Item -Path $Path -Force -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name EnableVirtualizationBasedSecurity -PropertyType DWord -Value 1 -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name RequirePlatformSecurityFeatures -PropertyType DWord -Value 1 -ErrorAction SilentlyContinue
New-ItemProperty -Path $Path -Name HypervisorEnforcedCodeIntegrity -PropertyType DWord -Value 0 -ErrorAction SilentlyContinue

New-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control\Lsa -Name LsaCfgFlags -PropertyType DWord -Value 1 -ErrorAction SilentlyContinue


#Stop Transcript Logging
. Stop-Logging

ConfigMgr–Extending Hardware Inventory

So if you are using ConfigMgr you probably know that you can extend hardware inventory to inventory pretty much anything. The cool thing is that there are two classes, available more or less by default, that are really nice to have.

The first is Win32_QuickFixEngineering, a WMI class listing installed patches. Having that inventoried means you can build collections based on missing patches, or on certain patches being installed, which is really nice when it comes to critical patches or hotfixes.
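To see what this class contains, you can query it locally on any Windows client; this is the same data ConfigMgr would inventory:

```powershell
# List installed patches from WMI, newest first
Get-CimInstance -ClassName Win32_QuickFixEngineering |
    Select-Object HotFixID, Description, InstalledOn |
    Sort-Object -Property InstalledOn -Descending
```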

The second is a custom class called Microsoft_BDD_Info, created by ZTITatoo.wsf when you deploy a computer with either MDT or ConfigMgr with MDT integration. It includes a bunch of information from the deployment; for instance, it lists which task sequence ID was run and the timestamp. If you inventory this you can keep collections based on which sequences were used, and if you have good version control in your sequences you can now find all computers that were deployed with a certain version of a sequence.

So how do you enable this awesomeness? Well, it's pretty simple. Fire up your ConfigMgr console and look under Administration and Client Settings. You will have a setting called Default Client Settings. Open it and on the left-hand side click Hardware Inventory. Next, click the Set Classes button on the right. You will be presented with a long list of WMI classes that can be enabled and disabled. At the top, do a search for "Quick" and select the class Win32_QuickFixEngineering.


To enable the second class a bit more work is required, but don't worry, it is not hard. Go back to the client settings (if you left them) and click the Add button at the bottom (see the image above for guidance). You now need to connect to a computer that has been deployed using MDT or ConfigMgr with MDT, so click the Connect button and type in the computer name and credentials if needed.


When connected you will see all the WMI classes available on that computer. Find the one called Microsoft_BDD_Info, select it, and click OK when done.


Now that both have been selected and you have saved the client settings by clicking OK all the way out, all that remains is waiting for the next hardware inventory to complete, and then you can start using the collected values.

Happy deploying!


MDT Database – The PowerShell module fix

A long time ago Michael Niehaus wrote a brilliant PowerShell module to manipulate the MDT database. It works great for bulk import of computers, creating and assigning roles and so forth. You can read more and download the module from his blog here.

The reason behind this blog post is that there is an issue with the module, or rather with a view in the database used by the module. The effect is that when searching for computers you cannot use the description field to find them.

So if we take a look at my database I have two entries both with a description.


But when I have imported the module, connected to the database, and run Get-MDTComputer -Description "LT-PETER", I get an error.


So Mikael Nyström and I did some digging and found that there is a mismatch between the query and the view being used.

The Fix

There are two ways of fixing this. You can either do it manually or use the script I have included here.

The manual way: open SQL Management Studio and browse to your database. Open the view called dbo.ComputerSettings, choose Design, and check the box in the column marked Description. Save and you're done.


The script way: download the script here and run it using PowerShell. The only things you need to enter are the name of your SQL Server (including instance name) and the name of your MDT database.


The script can be run with parameters on one line or just run the script and it will ask for server and database name.


Now when you run the command it can find the computer!


Download Script


MDT Database and custom values

I recently wrote a post on why I use the MDT database and how to set it up; the link to the post is here. There will almost always be scenarios that are not covered by the built-in features of MDT/SCCM, and for these the database is excellent. In fact, it is supported and even encouraged to extend the MDT database to cover other scenarios.

So this post will cover extending the database and using those custom values in a sequence.

Part 1 – Extending the database

First up is extending the database to have placeholders for whatever value you need. So on your MDT database box launch SQL management studio and open up your database.

Expand the tables view and then the table called dbo.settings. You will then have a “folder” called Columns. If you expand that you can see all the current settings on each object in the database.


Now just right click columns and select New Column.

You will be presented with a list on the right side; note that the last box is empty. Type in the name of the variable you want to create, in this case WSUS. Next, specify the type of value stored in the variable. For simplicity, use nvarchar(50); this creates a string value with a maximum of 50 characters. Don't forget to save the table.


Well, that's all well and good, but if you now look inside the MDT Workbench you won't be able to find the new value. This is because the views in the database need to be refreshed. To do this we use a small SQL script: create a new query and make sure it points to your MDT database.

Then run the following query:

EXECUTE sp_refreshview '[dbo].[ComputerSettings]'
EXECUTE sp_refreshview '[dbo].[LocationSettings]'
EXECUTE sp_refreshview '[dbo].[MakeModelSettings]'
EXECUTE sp_refreshview '[dbo].[RoleSettings]'

Note that there are single quote characters around the views being refreshed. Make sure they are copied correctly from the website.

Run the query and then you can see the value inside the Workbench.


(Yes I have included other variables as well but for now we will focus on the WSUS variable)

Part 2 – Using the values

Now let’s put that variable to good use. The variable can be set either on the computer object or a role or any other way you want to.

Here I have created a role named DebugRole and set the value WSUS to OFF. Next we need to make sure we can use the value, so open CustomSettings.ini and have a look at the second row, called Properties. If you already have custom properties (variables) this is familiar and you should already know how it works; if not, here is how it works.

The row is a simple comma-delimited list, so it will look something like this: Properties=MyCustomProperty, WSUS

If you have more options, that's fine; just add them before or after, with a comma between each.

OK, so when a computer that has the role is now imaged, the value WSUS will be set to OFF. Well, that's just a value, and in itself it will do nothing, so now what?

In your CustomSettings.ini, under the [Default] block, add a line saying WSUS=ON. Open your task sequence and scroll down to the steps for Windows Update (pre- and post-application) and go to the Options tab on each of them. First make sure they are enabled by deselecting the default option "Disable this step", then click the Add button and add a task sequence variable condition. Set the name to WSUS and the condition to equals ON.
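Put together, the CustomSettings.ini pieces might look roughly like this. The CSettings section name and Priority order are assumptions based on a typical database setup; the point is that the database query sections must come before Default, so a role value such as WSUS=OFF wins over the default:

```ini
[Settings]
; Database query sections (added by the database rules wizard)
; must come before Default so a role value such as WSUS=OFF wins
Priority=CSettings, Default
Properties=WSUS

[Default]
WSUS=ON
```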


Now it’s done! Your sequence will use a custom database value during deployment.

So how will this work then? Well, a computer that has been assigned the DebugRole will have the WSUS variable set to OFF, and as such it will not run the Windows Update steps.

This decreases the time it takes to image a computer, making your testing faster. If you have other steps in your sequence they can of course have the same type of condition on them; you could name the variable something like Debug instead and have a lot of steps that should only run when debugging.

As a last note, the above has been done in MDT and the Workbench, but it works the same way for SCCM with MDT integration. Haven't integrated your SCCM with MDT? Do it now!



MDT Database – the why and the how

The Why

When I discuss this with customers, the most common response to the MDT database is "well, we don't need it, we can fill in the form for each computer, that is just faster".

For me using the database is a given. It gives me flexibility and the result will always be the same regardless of who images a computer.

Imagine the following scenario: the user Bob's computer has an error. To solve the problem, he calls the service desk and they create a ticket. Now he has to get the computer to the service desk, or wait for an onsite technician to come help him reimage his computer so it gets the correct name, apps, etc.

Would it not be better to have all these things predefined, so Bob himself can reimage the computer and be back to work quicker? Most would agree. Then we have two main options.

  1. Generate data

This is a good option, but it only works for generic data, or for information that does not have to be specific.

  2. The database

This gives the option to preset information and in an easy way create roles for different type of information.

So if we look at the options above, the usual settings that fit in category 1 are things like the computer name, which can be generated based on, for example, the computer's serial number. Settings that normally fit in category 2 are applications or user-specific settings, e.g. this user should have these applications.
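As a sketch of option 1, a generated computer name rule in CustomSettings.ini could look like this (the PC- prefix and the 7-character length are my own assumptions, not a recommendation):

```ini
[Default]
; Build the name from the last 7 characters of the serial number;
; the #...# expression is evaluated by ZTIGather at deployment time
OSDComputerName=PC-#Right("%SerialNumber%",7)#
```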

So if you find this interesting, let’s move on to how to set this up.

The How

Setting up the database for use with either LTI (Lite Touch) or ZTI (Zero Touch) is easy and requires no additional licenses or products (well, almost: if you run LTI you will need SQL Express).

First up, the SQL Server: you will need one. I would not recommend making this part of your SQL Server cluster, as you need to enable the Named Pipes protocol. If you run LTI, install SQL Express on your MDT server; if you are running ZTI, you should already have SQL on your primary site server, so use that one.

The database will have an initial size of 4 MB, and after using it for a while and entering a couple of computers it might even grow to 10 MB. So this will not be the database that takes all the memory or space from your server.

The database is created from inside the Deployment workbench and when created it is also supported to extend and modify the database.

Step 1 – Create the database

Open the Deployment Workbench and in your deployment share go to Advanced Configuration. Right-click Database and select New Database.

Next, follow the guide to create a new database and give it a name.

Step 2 – Adding a computer

In this guide we will cover how to create a computer with the GUI. However, Michael Niehaus has created an MDTDB PowerShell module so you can do batch importing and other modifications to the database with PowerShell; you can read his blog post about it here:

To create a computer, select the Computers node under Database, right-click it and select New. It asks for a way of identifying the computer, and you have four options: asset tag, serial number, MAC address or UUID. You only need to enter one! You can also fill in a description; I usually fill in the computer name. The only reason I do this is that in the GUI the description is shown in the list of computers and gives me an easy way of identifying each computer.
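As a hedged sketch of the PowerShell route with the MDTDB module mentioned above (cmdlet and parameter names as I recall them from the module; server, database and values are placeholders):

```powershell
# Import the module and connect to the MDT database
Import-Module .\MDTDB.psm1
Connect-MDTDatabase -SqlServer "MDT01\SQLEXPRESS" -Database "MDTDB"

# Create a computer identified by MAC address only, with a couple of settings
New-MDTComputer -MacAddress "00:15:5D:01:02:03" -Description "LT-PETER" `
    -Settings @{ OSInstall = "YES"; OSDComputerName = "LT-PETER" }
```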


Step 3 – Adding settings to the computer

Next we need to add some settings to the computer. You can view this as filling in the wizard without being there. Under the Details tab you can fill in information for computer name, network adapter settings, domain join, etc. This is pretty much all the settings that can be defined in the wizard, plus some extras.

Step 4 – Adding applications to the computer

You can also specify applications that should belong to that computer; these can be either ConfigMgr 2012 applications or LTI applications. You can also add ConfigMgr packages if that is what you use.

Step 5 – Adding roles

I will not cover how to create roles, since I have already done a post about that. You can find it here:

This post also covers how to create the database and link the settings into ConfigMgr.

Step 6 – Adding administrators

You can also add local administrators: domain groups or users that will become local administrators of the computer.

Step 7 – Configure rules

The last step to get this working is configuring the rules. This ensures that when you deploy the computer it will query the database and get the relevant settings, applications, roles and administrators.

Under the Advanced Configuration section, right-click the Database node and select "Configure database rules". You will get a short wizard asking what you should query for. Since this is a basic setup you can safely query for everything, so leave everything selected and go through the wizard until it's done.

The wizard adds a number of lines to CustomSettings.ini, and if you are using LTI you are now ready to use the database. If you are using ZTI you need to copy the new information into the CustomSettings.ini in your settings package.


That’s it! You have successfully configured the database for use with either ZTI or LTI.

In a coming post I will cover extending the database with custom options, so stay tuned.



Merge WIM into one – the space saver

I have gotten this question a couple of times: "Can I have two operating systems to choose from in one task sequence?" The correct answer is yes, but it takes up a lot of unnecessary space, and if you are using ConfigMgr and need to download two WIMs instead of one, that adds a lot of time.

What I would instead recommend is merging the two WIM files into one. This saves a lot of space and still gives you the option to use different reference images in the same task sequence.

So how is this done?

First off you need to create two reference images. The most common scenario is that you have one with Office preinstalled and one without. So if we look at how that looks, you will get something like this:


In this case I am using Windows 10 reference images, but this works just as well with Windows Vista, 7, 8 and 8.1 (all WIM-based OSes).

As you can see, they are around 4-5 GB in size. The next step is to merge them. To help with this I have a small script that you can use.

What the script does is take one WIM and mount it. Then it applies the mounted WIM into the other WIM so you get two indexes; next it cleans up the mount directory and finally displays the different indexes in the merged WIM file.
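The same result can also be sketched without mounting at all, using the DISM PowerShell module; paths and image names here are placeholders, not the ones my script uses:

```powershell
# Export the single index of one WIM straight into the other.
# Duplicate files are stored only once, which is what keeps the size down.
Export-WindowsImage -SourceImagePath "C:\Images\REF-W10-NoOffice.wim" -SourceIndex 1 `
    -DestinationImagePath "C:\Images\REF-W10-Office.wim" `
    -DestinationName "Windows 10 Enterprise x64 NoOffice"

# Verify that the merged file now contains both indexes
Get-WindowsImage -ImagePath "C:\Images\REF-W10-Office.wim"
```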

You can download the script here:

When that is complete you get something looking like this.


As you can see, the image is now a bit bigger, but it has not doubled in size. This is because when the WIMs are merged, all duplicate files are thrown away to keep the file size down.

This is the same method Microsoft has used in the past when creating Windows Server media containing Core and GUI versions on the same media.

The next step is to import this into whatever solution you are using (MDT/SCCM).

In this instance I have used MDT; it looks similar in SCCM, but there are a couple of differences. If you are unsure, drop me an email or PM and I can help you out.

So, import an operating system, choose custom image, and point to the WIM created earlier. When it's done it looks something like this:


If we look at the properties for these two operating systems, you can see that they both use the same file in the background, but different indexes.


Now you can add another Install Operating System step and select different criteria to run the different steps: for instance, different blocks in CustomSettings.ini, some setting in the MDT database (or a new setting you add to it), or web services checking whether the computer is in a certain OU or AD group to decide if it should have Office or not. The possibilities to create rules are, as always, limitless.

Happy deploying!


System Center Configuration Manager 2012 R2 SP1 CU2

Here we are, it's time for another upgrade: another update with some bugfixes and goodies inside.

This time it's called Cumulative Update 2 for System Center Configuration Manager 2012 SP2 and 2012 R2 SP1. This CU also includes a number of hotfixes that have been released since CU1, among others the important one for drivers, where the package size gets a bit bloated.

Installing this is pretty straightforward. Start with making sure you have a backup of the system! This should always be done, but since this includes a DB update as well it's extra important. Next up is a reboot for good measure. You can always check if there are any pending reboots, but I prefer to just do a clean reboot to make sure.

If you want to make use of snapshots/checkpoints, make sure the VM is turned off when you take the snapshot/checkpoint. This is to ensure data consistency, since there is a SQL database in the background.

When this is done you apply the patch and wait for it to complete. Options for automatic upgrade and client package creation will be included in the wizard.

When you are done, make sure to do a reboot, as the CU requires it!

After the reboot you can start upgrading your clients, and when they are done it should look something like this.


Take note that the new version number is 5.00.8239.1301.
Feel free to create a collection to include these, so you also get collections with the previous versions.
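A collection like that can be sketched with the ConfigurationManager PowerShell module; the collection name and limiting collection here are placeholders:

```powershell
# Create a collection of devices already on the CU2 client version
New-CMDeviceCollection -Name "Clients on 5.00.8239.1301" `
    -LimitingCollectionName "All Systems"

Add-CMDeviceCollectionQueryMembershipRule -CollectionName "Clients on 5.00.8239.1301" `
    -RuleName "ClientVersion CU2" `
    -QueryExpression "select * from SMS_R_System where SMS_R_System.ClientVersion = '5.00.8239.1301'"
```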

To create collections for most of the agent versions you can use a PowerShell script I created. It can be downloaded here.

The Cumulative Update can be found here

Good luck and make sure to have a backup!

Techdays Sweden Day 2

Techdays has ended its second day, and with that it's over for this year. It has been a good year with lots of fun sessions, new insights, new connections and, last but not least, lots and lots of cool demos!


Well, this will be a bit of an anticlimax, as I missed the keynote on the second day, but from what I have been told it was a good one. My concern with this secondhand information is of course that it was Scott Hanselman who did the keynote. While Scott is an expert speaker and fun to listen to, he is also a developer, and as an IT pro, JavaScript and the like doesn't really give me anything. For me the keynote should be about new technologies and how the roadmap looks ahead, or, if that is not possible to share for some reason, I would like the keynote to be an inspirational talk about something related to IT. As an example, a couple of years ago at a previous Techdays the second day keynote was held by an inspirational speaker, and it had nothing to do with IT but at the same time it had everything to do with it. This is due to the fact that the speech was about "Getting things done", which most of us can relate to.

Enough rant!


Day 2 sessions for me were about the datacenter, the client, Windows 10, Windows Server 2016 and my favorite, AzurePack vs AzureStack (more on this later).

Mikael Nyström (@Mikael_nystrom) and Markus Lassfolk (@Lassfolk) did a session about the modern datacenter and how to get there using no investment at all. The fact that this gives you the option to build your private cloud the fabric way with what you already have is one thing; the other is that they showed how to get a good grip on your environment, how much old stuff is running, and how you can consolidate it. To get this overview there is only one tool to use, the Microsoft Assessment and Planning Toolkit, and they of course showed you how to use it!

At the same time, Marcus Murray (@MarcusSwede) and Hasain Alshakarti (@Alshakarti) showed how to hack-proof your clients in a day. I have seen older versions and variations of this session before, and it is well worth your time! Check it out when/if they release the sessions online!

Next up was "The Force Awakens" with Johan Arwidmark (@Jarwidmark) and Mikael Nyström (@Mikael_nystrom), showing how Windows 10 and Windows Server 2016 are better together! This session was full; well, full doesn't really cover it. The seats were filled, people were standing along all three walls to see the session, and every open spot on the floor had people sitting in it. If the room was sized for 100 people, my guess is that there were 200 people there. When this session shows up online, if you didn't catch it, go see it! In their fun and relaxed way they showed news in both the client and the server and how to deploy them both.

Ending the day for me was the highlight of Techdays: Mikael Nyström (@Mikael_Nystrom) and Markus Lassfolk (@Lassfolk) showed how you should be doing self-service, and why, in the session AzurePack vs. AzureStack. The session content covered the differences between AP and AS and how the roadmaps for both look. The demos were just amazing: building a new website in seconds, using PowerShell against AzurePack the same way you would against Azure, and running backup and restore of a server within the AzurePack portal from the correct DPM server. This is a must-see, and if you haven't set up self-service yet, now is a good time! If you don't know how, let me know and I will put you in contact with both Markus and Mikael so they can help you out!


Hopefully you had a nice experience at Techdays, learned a lot, and got some ideas on what to do going forward; I know I have. Hope to see you next year, and if you have any questions give a shout below, on Twitter or Facebook!

The future is now decoded! 😉 

Techdays Sweden Day 1

Day 1 of Techdays comes to an end (yes, there was a pre-conf day as well, but I was unable to attend, so this is the official first day), and I can only sum up the day as overall a good one, but with some minor failures.


To start off, the keynote at Techdays. This for me is supposed to be one of the highlights: showcasing new technology, with well-versed speakers and innovative content. The first part of the keynote failed to deliver this, as Microsoft Sweden director Jonas Persson showed a couple of slides but no real demos, no new technology, and had nothing new or exciting to offer. The only memorable part of Jonas' presentation was when he promised that next year the keynote speaker would be standing on a hoverboard made by him.

Thankfully this didn't go on too long, as Ben Armstrong (@VirtualPCGuy) soon took over and showcased a number of demos showing off new features of Windows 10. While Ben is an excellent speaker, this too fell a bit short: the information around these features has been out for several months and really didn't give us anything new.

First session of the day

Trying to save the morning after a keynote lacking in both content and the ability to excite, Johan Arwidmark (@JArwidmark) took the stage to talk about upgrading to Windows 10. As in previous sessions, Johan did an excellent job. There was new content, showcasing among other things the new servicing nodes in System Center Configuration Manager Technical Preview 3. He also talked about using the correct edition of Windows, and would recommend the Enterprise edition to all businesses, with one exception: universities, schools and the like. For these, the Education edition would be the way to go. The Education edition is the same as Enterprise except for how it's licensed and priced.

The Afternoon

The afternoon started off with a session from Mikael Nyström (@Mikael_nystrom) and Markus Lassfolk (@Lassfolk) about how to build a modern datacenter in a yellow case. An excellent session about automation, the key points in the datacenter, and how you should learn to build your demo and/or test environment rapidly using PowerShell. Mikael also showed off an unsupported feature in Microsoft Deployment Toolkit: using the comments field in the Task Sequence selection dialog to select a role to go with that Task Sequence. Mikael has promised a blog post on how to do this later; for more information, check out his blog.

Next I checked out Ben Armstrong's (@VirtualPCGuy) session on Windows containers and Docker. A really interesting session on why they are doing this, how it is being developed, and what to expect. For me this was a bit of an eye-opener: I have not worked with Docker before, and seeing the integration, and how they use the same tools and language that Docker uses today with Linux, is nice to see. A key point is that Windows Server containers can run both on physical hardware and inside Hyper-V. Coming later this year, Hyper-V containers will also become available, and this has been a driving force behind the nested Hyper-V scenario. This is a feature that is really cool! Stay tuned for more information as Microsoft makes it available.

Ending the day was left again to Mikael (@Mikael_nystrom) and Markus (@Lassfolk), showing off the new features they like in Windows Server 2016 Technical Preview 3. A nice session about software-defined storage, JIT and JEA (read on for more information on what that is), LockOnDisconnect and other new features.

The big thing coming with the new release of Windows Server is Storage Spaces Direct. Instead of a SAS-connected storage enclosure (JBOD), it uses local drives in a number of servers to achieve a highly available file storage solution, which is then presented using Storage Spaces as normal. The benefit is that both HDDs and SSDs with SATA connections are a lot cheaper than the equivalent SAS-connected drives. In combination with the new storage replication feature, this gives new ways of building storage and making sure it is always available, both locally and in remote sites.
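As a rough sketch of what the setup looks like in PowerShell (note: cmdlet and parameter names are based on what is expected for the final release; in the Technical Preview they may differ, and the volume name and size below are made-up examples):

```powershell
# Assumed sketch: enable Storage Spaces Direct on an existing failover cluster.
# Run on a cluster node with the FailoverClusters module available.
Enable-ClusterStorageSpacesDirect

# Carve a cluster shared volume out of the pooled local drives.
# Pool name, volume name and size are placeholder examples.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" `
    -FileSystem CSVFS_ReFS -Size 1TB
```

This is a configuration fragment that only makes sense on an actual cluster, so treat it as an outline of the steps rather than something to paste in as-is.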

Next was a quick demo of the new witness function: using Azure storage as the cluster witness quorum gives increased flexibility and is easily managed. Check back when the videos of the sessions are posted for a quick setup guide.
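Configuring the cloud witness is essentially a one-liner in PowerShell. A sketch, assuming the expected parameter names for the final release, with placeholder account details:

```powershell
# Assumed sketch: point the cluster quorum at an Azure storage account
# acting as a cloud witness. Account name and key are placeholders.
Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"
```

Since this targets live cluster and Azure resources, it is a configuration fragment to illustrate the idea, not a tested command line.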

JIT and JEA were next up, which translate into Just In Time and Just Enough Administration. These give you the ability to grant people access to a server while only allowing them to do exactly what they are supposed to do, and only when they are supposed to do it, rather than all the time. This is of course done through PowerShell, so if this is a feature you would like to learn more about and play around with, and you don't know PowerShell, now is a good time to learn.
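As a rough illustration of the "just enough" part (a minimal sketch, not the full JEA toolkit; the file path, endpoint name and visible cmdlets are made-up examples), a constrained remoting endpoint can be built from a session configuration file that only exposes a handful of cmdlets:

```powershell
# Assumed sketch: create a restricted remoting endpoint that only exposes
# a couple of service-management cmdlets. Path and name are examples.
New-PSSessionConfigurationFile -Path "C:\JEA\Maintenance.pssc" `
    -SessionType RestrictedRemoteServer `
    -VisibleCmdlets "Get-Service", "Restart-Service"

# Register the endpoint on the server; an operator would then connect with:
#   Enter-PSSession -ComputerName Server01 -ConfigurationName Maintenance
Register-PSSessionConfiguration -Name "Maintenance" -Path "C:\JEA\Maintenance.pssc"
```

Anyone connecting to that endpoint sees only the visible cmdlets, nothing else, which is the core of the JEA idea; the "just in time" part is then about granting that access only for a limited window.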

Wrap Up

This wraps up day 1, and my distinct feeling after it is: it's more of the same, and since the next server version still has a way to go, you should really focus on getting your deployment solution and yourself ready for Windows 10, both in managing it and deploying it. Both I and my team would be happy to help you get started or get over any issues you have with upgrading. Get in touch with me here, by email, Twitter or Facebook, and we can set something up.

As a last note I will leave you with my three favourite quotes of the day. (They are in Swedish, so if you don't speak or read it, I'm sorry; they lose their authenticity if translated.)

          JIT och JEA är inte två fåglar ("JIT and JEA are not two birds")

          Din demomiljö behöver vara ”wife approved”, tyst, drar lite ström och tar liten plats! ("Your demo environment needs to be 'wife approved': quiet, drawing little power and taking up little space!")

          Man får inte nobelpris i PowerShell ("You don't get a Nobel Prize in PowerShell")