Finally the holiday season has started for me, so this blog will be silent until the first week of January.
Friday, December 23, 2016
Yikes! It seems Microsoft has released a TOTALLY NEW AD MP! Which is quite awesome, since the previous MP had some serious issues. Most of them seem to be fixed in this MP.
The version of this MP is 10.0.1.0. What has changed? A LOT!!! Taken directly from the guide:
Version 10.0.0.0 of the Management Pack for ADDS is an initial release of a new Management Pack for Active Directory® (AD). It is based on the Active Directory Management Pack (AD MP) and includes many changes from the AD MP.
- Removed Event Alert rules, all Error and Warning events from AD related event logs are now only collected in the Events collections.
- Informational events can be collected as well by turning on the Information Events rules.
- Replication Monitoring replaced with the following monitors:
- AD Replication Queue Monitor
- AD Show Replication Check
- Replication Partner Count Monitor
- Replication Consistency Monitor
- Removed reliance on OOMADS.dll for Domain Controller monitoring; the oomads dependency is removed from all MPs.
- Removed dependency on down-level DC discovery MPs
- Created well defined aggregate roll-ups for health monitors
- New server health monitors
- Strict replication
- DNS service
- Group Policy
- Network adapters
- New domain member monitors
- Reliable time server
- Secure channel
- DC health
- Group policy
- Removed deprecated rules, alerts, and tools
- Added additional information to alerts and monitors and updated knowledge base information
- Added performance collection rules for DNS perf counters
As you can see, this is indeed a whole new MP. And on the surface it seems Microsoft has addressed many pain points of the previous version.
This MP works on DCs running Windows Server 2012, 2012 R2 and 2016. It runs on SCOM 2012 R2 or later.
Want to download this MP? Go here.
Kevin Holman has also written a posting about this new MP.
Wednesday, December 21, 2016
Update 12-21-2016: As it turned out, additional ports have to be opened as well, so I’ve updated this posting accordingly. Please know that this posting came to be using different resources, so don’t think I invented the wheel myself. As such I’ve updated the section ‘Used resources’ as well.
Suppose you’ve rolled out a VM with Windows Server 2016 Core and deployed SQL Server 2016 on that same VM (with the command line setup.exe /UIMODE=EnableUIOnServerCore /Action=Install).
Another VM runs Windows Server 2016 with Desktop Experience and is used as a Stepping Stone server, hosting all kinds of Consoles in order to manage the products/services hosted by many other VMs running the Core installation option.
On that server you start SQL Server Management Studio and want to connect to the previously installed SQL instance. However, all you get is this error message: ‘…Cannot connect to [SQL instance]. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 5)…’
Even when you’ve configured the SQL instance correctly during installation, so that the account you’re using has access permissions, SQL Server and the VM hosting it require additional configuration before the instance can be accessed remotely with SQL Server Management Studio.
Follow these steps and, when done correctly, you’ll be able to access the SQL instance remotely using SQL Server Management Studio.
- Ascertain that the SQL Server Browser Service is running and set to start automatically
Connect remotely to the related VM (with Server Manager or the Services console) and check the SQL Server Browser service. Correct the configuration where required, so the service is running and set to start automatically.
- Enable remote connections on the instance of SQL Server
Do this locally on the VM hosting the related SQL instance. Use SQLCMD.exe locally and execute the following statements against the Server Core instance:
EXEC sys.sp_configure N'remote access', N'1'
RECONFIGURE WITH OVERRIDE
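From an elevated prompt on the Core box, the two statements above can also be run as a single sqlcmd one-liner; a sketch, assuming the default instance and Windows authentication:

```shell
sqlcmd -E -Q "EXEC sys.sp_configure N'remote access', N'1'; RECONFIGURE WITH OVERRIDE;"
```

For a named instance, add -S .\INSTANCENAME to the command line.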
- Enable TCP/IP on the Instance of SQL Server
Do this locally on the VM hosting the related SQL instance. Start PowerShell when logged on.
Import the SQL PS module (Import-Module SQLPS) and run this PS script (copy & paste works):
$smo = 'Microsoft.SqlServer.Management.Smo.'
$wmi = New-Object ($smo + 'Wmi.ManagedComputer')
# Enable the TCP protocol on the default instance. If the instance is named, replace MSSQLSERVER with the instance name in the following line.
$uri = "ManagedComputer[@Name='" + (Get-Item env:\computername).Value + "']/ServerInstance[@Name='MSSQLSERVER']/ServerProtocol[@Name='Tcp']"
$Tcp = $wmi.GetSmoObject($uri)
$Tcp.IsEnabled = $true
# Persist the change; restart the SQL Server service afterwards for it to take effect
$Tcp.Alter()
- Create exceptions in Windows Firewall
Do this locally on the VM hosting the related SQL instance. Start PowerShell when logged on.
These two lines will allow remote access to the default SQL instance over TCP port 1433 (the first uses the deprecated netsh firewall context, the second the current netsh advfirewall syntax):
netsh firewall set portopening protocol = TCP port = 1433 name = SQLPort mode = ENABLE scope = SUBNET profile = CURRENT
netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN
These two lines will allow remote access from SQL Server Management Studio to the SQL instance over TCP port 1434, the Dedicated Admin Connection (note the different rule name, so it doesn’t clash with the 1433 rule):
netsh firewall set portopening protocol = TCP port = 1434 name = SQLAdminPort mode = ENABLE scope = SUBNET profile = CURRENT
netsh advfirewall firewall add rule name = SQLAdminPort dir = in protocol = tcp action = allow localport = 1434 remoteip = localsubnet profile = DOMAIN
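Since netsh firewall is deprecated on Windows Server 2016, the same two openings can also be sketched with the newer NetSecurity cmdlets used further below (the rule names are my own examples, not from the original posting):

```powershell
# Equivalents of the netsh rules above, scoped to the local subnet
New-NetFirewallRule -DisplayName "SQL Server (TCP Port 1433)" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -RemoteAddress LocalSubnet -Action Allow
New-NetFirewallRule -DisplayName "SQL DAC (TCP Port 1434)" -Direction Inbound `
    -Protocol TCP -LocalPort 1434 -RemoteAddress LocalSubnet -Action Allow
```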
These two lines will allow SQL Broker traffic over TCP Port 4022:
New-NetFirewallRule -DisplayName "Allow inbound SQL Broker Traffic (TCP Port 4022)" -Direction inbound -LocalPort 4022 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Broker Traffic (TCP Port 4022)" -Direction outbound -LocalPort 4022 -Protocol TCP -Action Allow
These two lines will allow SQL-Transact traffic over TCP Port 135:
New-NetFirewallRule -DisplayName "Allow inbound SQL-Transact Traffic (TCP Port 135)" -Direction inbound -LocalPort 135 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL-Transact Traffic (TCP Port 135)" -Direction outbound -LocalPort 135 -Protocol TCP -Action Allow
These two lines will allow SQL Browser traffic over TCP Port 2382:
New-NetFirewallRule -DisplayName "Allow inbound SQL Browser TCP Traffic (TCP Port 2382)" -Direction inbound -LocalPort 2382 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Browser TCP Traffic (TCP Port 2382)" -Direction outbound -LocalPort 2382 -Protocol TCP -Action Allow
These two lines will allow SQL Browser traffic over UDP Port 1434:
New-NetFirewallRule -DisplayName "Allow inbound SQL Browser UDP Traffic (UDP Port 1434)" -Direction inbound -LocalPort 1434 -Protocol UDP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Browser UDP Traffic (UDP Port 1434)" -Direction outbound -LocalPort 1434 -Protocol UDP -Action Allow
!!!Only when required!!!
These two lines will allow web traffic over TCP Port 80 (e.g. for SSRS instances):
New-NetFirewallRule -DisplayName "Allow inbound HTTP Traffic (TCP Port 80)" -Direction inbound -LocalPort 80 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound HTTP Traffic (TCP Port 80)" -Direction outbound -LocalPort 80 -Protocol TCP -Action Allow
!!!Only when required!!!
These two lines will allow secure web traffic over TCP Port 443 (e.g. for SSRS instances):
New-NetFirewallRule -DisplayName "Allow inbound HTTPS Traffic (TCP Port 443)" -Direction inbound -LocalPort 443 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound HTTPS Traffic (TCP Port 443)" -Direction outbound -LocalPort 443 -Protocol TCP -Action Allow
!!!Only when required!!!
These two lines will allow SQL Analysis traffic over TCP Port 2383:
New-NetFirewallRule -DisplayName "Allow inbound SQL Analysis Traffic (TCP Port 2383)" -Direction inbound -LocalPort 2383 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Analysis Traffic (TCP Port 2383)" -Direction outbound -LocalPort 2383 -Protocol TCP -Action Allow
Allow WMI traffic
When installing SCOM 2016, for instance, WMI traffic has to be allowed. By default the Windows Firewall on the SQL box blocks it, stopping the installation of SCOM 2016. With this one-liner WMI traffic is allowed:
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
No restart is required. Now all required SQL and WMI traffic to the SQL server is allowed.
Used resources:
- Create exceptions in Windows Firewall: https://msdn.microsoft.com/en-us/library/cc646023.aspx
- Enable remote management of SQL Server on Core: https://msdn.microsoft.com/en-us/library/hh882461.aspx
- TechNet TechCenter: Open Windows Firewall Ports for SQL Server PowerShell Script
Friday, December 16, 2016
Hyper-V 2016 ‘Bug’: WS 2016 Server LogonUI.exe Doesn’t Allow 2x [ESC] Key When Connected To VM In Enhanced Session Mode
Noticed this issue some time ago in my test lab but forgot to blog about it. Nonetheless it can be a nagging issue, while the solution is simple. So here it is.
A new VM is deployed, based on WS 2016 Std, no GUI. When this VM is added to the domain and restarted, the logon screen defaults to the old credentials AND the old system name. This doesn’t work, since one has to use another (AD-based) account.
For this the LogonUI.exe screen tells you to hit the [Escape] key twice in order to enter alternate credentials. However, when connected to the VM in Enhanced Session mode, only the first [Escape] key press is processed; the second one isn’t accepted.
Somehow, when running an enhanced session with the related VM, the second hit of the [Escape] key isn’t passed to the VM.
The workaround: log on again and hit the [Escape] key twice once more. This time the second [Escape] key press will be passed to the VM as well, allowing you to switch to other user credentials.
When running a test lab on a tight budget it’s a challenge to get the most out of the available CPU, RAM and storage. Over the last few years I’ve learned some nice tricks to run the maximum number of VMs in my test lab while still getting acceptable performance.
Please be reminded: this approach of combined ‘tricks’ is only viable in test labs and shouldn’t be used in any production environment at any time! And no, I am NOT responsible for your test labs in any kind of way…
Some ground rules first
Here are some basics in order to get the most out of the available hardware of your test lab.
- Run the parent Windows Server OS hosting the Hyper-V role from a ‘classic’ disk (no SSD) which has good performance (10K RPM and lots of cache);
- Put the ISOs and other software store on that classic disk as well;
- Run all the VMs from the available SSDs, never ever from the ‘classic’ disk!;
- The same goes for storing the metadata and memory of your VMs: store them on the available SSDs, never ever on the classic disk.
Resource saver 01: Differencing Disks
When using differencing disks for ALL the VMs running on your test lab system, you save a LOT of storage. The parent disk contains the server OS and the differencing disk contains the deltas for that particular VM. For instance, the VM running SQL will have a differencing disk containing the SQL installation and DB files, but use the parent disk hosting the server OS, containing between 9 and 14 GB of data.
That parent disk will be used by all other VMs, resulting in MASSIVE disk cost savings per VM.
How to create a parent disk? That’s easy!
- Create a new VM
- Install Windows Server (2016 for instance)
- Configure it as required (time/date settings and so on)
- Install the latest updates
- Run SysPrep with this syntax: sysprep.exe /oobe /generalize /shutdown /mode:vm
- The VM will shutdown itself when done with sysprep
- Copy the related VHDX file to a new folder, like D:\_Differencing Disks
- Rename the VHDX file so you know exactly what this file is all about (e.g. WS2016-DiffDisk-Std-GUI.vhdx for Windows Server 2016 Std with Desktop Experience and WS2016-DiffDisk-Std-CLI.vhdx for Windows Server 2016 Std without Desktop Experience);
- Set the VHDX file to be read-only.
Now you’ve got yourself a nice parent disk. Read this posting in order to roll out a VM using this parent disk.
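Rolling out a VM from such a parent disk comes down to creating a differencing disk that points at it. A minimal PowerShell sketch (paths and the VM name are my own examples):

```powershell
# Create a differencing disk on the SSD, pointing at the read-only parent disk
New-VHD -Path 'E:\VMs\TestVM01\TestVM01.vhdx' `
        -ParentPath 'D:\_Differencing Disks\WS2016-DiffDisk-Std-GUI.vhdx' -Differencing

# Attach it to a new VM
New-VM -Name 'TestVM01' -MemoryStartupBytes 1024MB -Generation 2 `
       -VHDPath 'E:\VMs\TestVM01\TestVM01.vhdx'
```

Keep the parent disk read-only; any write to it invalidates all differencing disks based on it.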
Resource saver 02: No GUI!
Yes, I know. Many Windows users are used to clicking through windows. Hence the name of the OS! BUT when running Windows Server 2016 Std without a GUI as a parent disk, one saves about 4.5 GB compared to a parent disk hosting Windows Server 2016 Std with a GUI (Desktop Experience).
When running MANY VMs, with as many of them as possible using the no-GUI version, one quickly saves tens of GBs!
Besides that, one learns how to work with Windows Server 2016 without a GUI, which is a good thing as well. Ever heard of the utility sconfig? It’s powerful and helps one out with the basic configuration stuff:
Resource saver 03: Deduplication
Wow! This feature is totally awesome. And pretty easy to use on the Windows Server 2016 box hosting all the VMs. Simply add the Data Deduplication role service (File Server > Data Deduplication) to your server, shut the VMs down, enable dedup on the volumes hosting them and start a dedup job.
Let it run as long as it takes. With the PS cmdlet Get-DedupJob you’ll see the progress of the running dedup job(s).
When dedup is ready, fire up the VMs and you’re back in business! And of course, all these steps can be scripted with PowerShell as well, and that PS script can be scheduled as required.
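Scripted, the whole dedup cycle could look like this (a sketch; the volume letter and the stop-all-VMs handling are my own assumptions):

```powershell
# Install the role service, then dedup the SSD volume holding the (stopped) VMs
Install-WindowsFeature -Name FS-Data-Deduplication
Get-VM | Stop-VM                                  # dedup works best on closed VHDX files
Enable-DedupVolume -Volume 'E:' -UsageType HyperV # WS2016 has a Hyper-V usage type
Start-DedupJob -Volume 'E:' -Type Optimization
Get-DedupJob                                      # check progress; when done:
Get-VM | Start-VM
```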
Resource saver 04: Dynamic Memory
With dynamic memory you can squeeze the maximum out of the available RAM. Even more so when using Windows Server 2016 WITHOUT a GUI, since that installation option has a far smaller footprint on the available resources.
As such you can run VMs hosting AD domain controllers and DNS while consuming only 675 MB of RAM, and with the dynamic memory config you can cap them at 1024 MB max.
This way you get the most of the available RAM of your Hyper-V server.
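In Hyper-V terms, such a config for a small DC/DNS VM can be sketched as follows (the VM name and startup/minimum values are my own assumptions; the 1024 MB cap is the one mentioned above):

```powershell
# Cap a small DC/DNS VM at 1 GB while letting Hyper-V balloon its memory as needed
Set-VMMemory -VMName 'DC01' -DynamicMemoryEnabled $true `
    -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 1024MB
```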
Sure, everything can be put into the cloud. But guess what? Running 20+ VMs in Azure isn’t cheap. One saves a LOT of money when hosting those same VMs on an oversized desktop functioning as a test lab.
When using it smartly, with all the resource savers I mentioned before, you’ll squeeze the max out of it while still getting reasonable performance.
And when combined with Splashtop you can remotely wake up the test lab when required (some additional one-time router configuration is needed here). As such the test lab doesn’t have to run 24/7 but is only fired up when required.
Some years ago I bought myself a new system to function as my personal test lab. Since the budget didn’t allow for a state-of-the-art system, I had to puzzle a lot. Yes, I needed storage with high IO, a reasonably fast CPU and loads of fast RAM.
But again, the budget was limited. So after a lot of research I spent every euro of the allocated budget and got maximum value for money. All based on PC (desktop) hardware and not a single piece of server hardware, because that was way outside the budget. But still, the system I finally got allowed me to build my own test lab, running 16 VMs and still delivering good performance!
Since the system allowed for growth, in the past years I added more RAM, additional SSDs for storage and upgraded the CPU as well. On the server OS side of things the lab ran Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and now Windows Server 2016.
The NIC ‘issue’
But I was always a bit hesitant to upgrade the parent Windows Server OS since the Intel desktop motherboard (DZ68DB series) in this system has some quirks. The integrated Intel 82579 Gigabit NIC won’t install by default on a Windows Server OS. It requires some additional steps in order to make it work. The reason here is that the driver BLOCKS the installation on any Windows Server OS by default!
Understandable in itself, but quite frustrating after having spent all my available budget on my new test lab to be!
So with every new Windows Server OS upgrade I went through the same challenge. Of course, I could use another NIC instead. And believe me, I tried! But here another quirk came up: that other NIC (I tried different brands with different chipsets) never worked!
In my other systems the same NICs worked without a sweat, but in the would-be server it was a no-go, no matter what I tried. And believe me, I went deep! So I HAD to make the onboard Intel 82579 Gigabit NIC work, no matter what!
Intel 82579 Gigabit NIC vs ME: 0-1!!!!
When Windows Server 2016 went GA I decided to upgrade my whole lab to this new Server OS. So I had to face the challenge, making the Intel 82579 Gigabit NIC work with Windows Server 2016.
Last weekend it was show time! And to my surprise I figured it out rather quickly this time: within less than an hour Windows Server 2016 installed the driver, resulting in a fully functional NIC!
I decided to share this, since the same approach can be used for making any Intel desktop NIC work on Windows Server 2016.
How the west was won
First Windows Server must be put into ‘test mode’. As such it accepts the installation of unsigned drivers. Follow this procedure:
- Run the command bcdedit -set loadoptions DISABLE_INTEGRITY_CHECKS
- And afterwards the command bcdedit -set TESTSIGNING ON
- Reboot the server
After the reboot the server is in test mode, as shown in the lower right corner of the desktop.
Now it’s time to get the hardware IDs of the Intel NIC. You’ll need those IDs later on.
- Open Device Manager, select the Intel NIC > Properties > tab Details > select the property Hardware Ids
- You only need the three most specific entries listed there.
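The same hardware IDs can also be pulled without Device Manager; a sketch using the in-box PnpDevice PowerShell module:

```powershell
# List the hardware IDs of all network-class devices
Get-PnpDevice -Class Net |
    Get-PnpDeviceProperty -KeyName 'DEVPKEY_Device_HardwareIds' |
    Select-Object -ExpandProperty Data
```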
With this information it’s time to ‘hack’ the INF file so the driver will install just fine.
- Download the Intel 82579 Gigabit NIC drivers for Windows 8 x64
- Run the installer BUT DON’T go through the wizard. Instead search for a folder with a name like RarSFX0. Most of the time it’s located in the temp folder of your user profile, like C:\Users\[USERNAME]\AppData\Local\Temp\[RANDOM NAME, MOSTLY A NUMBER]\RarSFX0. In my case it was: C:\Users\Administrator\AppData\Local\Temp\2\RarSFX0
- Copy the folder PRO1000 to another location, like C:\Temp\Intel NIC for instance
- Close the Intel NIC installer > Cancel > Yes > Finish
- Open the folder ~\PRO1000\Winx64\NDIS64
- Open the e1d64x64.inf file, I used the default Notepad application to edit the file
- In the [ControlFlags] section, delete ALL three lines, since this section is what blocks the installation of the driver on Windows Server
- Select and copy the three E153B lines found under the [Intel.NTamd64.6.2] header (the Windows 8 section)
- Paste these three lines into the [Intel.NTamd64.6.3] section, below the E15A0 lines
- Now alter the DeviceID of the copied lines so they match the three Hardware Ids found earlier
- Save the file
- Install the modified driver from Device Manager > select the NIC > Properties > Driver > Update Driver > choose Browse my computer for driver software > Browse > and select now the folder containing the modified file (in my case: C:\Temp\Intel NIC\PRO1000\Winx64\NDIS64) > Next
- You’ll get the warning about installing an unsigned driver, ignore it. Follow the wizard and soon the NIC is in working condition!
- As a last step, reenable the driver integrity checks and disable test signing again by using the following commands:
- bcdedit -set loadoptions ENABLE_INTEGRITY_CHECKS
- bcdedit -set TESTSIGNING OFF
- Reboot the server
- And now all is ready!
And as stated before, this method can be used with any other Intel NIC. Just be sure to use the correct Hardware Ids.
Tuesday, December 13, 2016
A few days ago Microsoft released an update for their Windows Server OS MP, version 6.0.7323.0.
Apparently the ‘author’ was a bit busy and forgot to finalize this important document…
But the changes in this MP are:
- Added Storport Miniport monitor for monitoring Event ID 153 in Windows Server 2003, 2008 and 2012 platforms.
- Fixed bugs:
- Logical Disk MB Free Space and Percentage Free Space monitor issues: an operator can now set threshold values for the Error state even within the default Warning-state thresholds. In that case the Error state will supersede the Warning state according to the set values; the Error threshold is independent of the Warning threshold.
- Fixed localization issue with root report folder in the Report Library.
- Windows Server 2003/2008/2008 R2/2012/2012 R2 Computer discovery was causing repeated log events (EventID: 10000) due to improper discovery of non-2003/2008/2008 R2/2012/2012 R2 Windows Server computers.
As such the changes aren’t that big. This update is more aimed at aligning this MP with the Windows Server 2016 OS MP, which uses the same library MPs.
For a few weeks now the Windows Server 2016 MP (version 10.0.8.0) has been available for download.
With the release of this MP Microsoft breaks with the tradition that a single Windows Server OS MP covers all versions in Mainstream Support, since this MP ‘only’ covers Windows Server 2016 installations, Nano Server included.
The MP can be downloaded from here.
For some months the OMS Gateway with SCOM support was in public preview.
Now it’s GA, with these two significant updates:
- It’s available in 18 different languages;
- Microsoft Update supports this product.
You can either download the OMS Gateway from your OMS Workspace or the Azure Portal.
Want to know more? Go here.
Wednesday, November 23, 2016
I thought I already understood all there was to know about Resource Pools. But heck no! I wish I had known what Kevin has just posted when I wrote the chapter ‘Complex Configurations’ for the SCOM 2012 Unleashed book.
But back then I didn’t know. Now I do. There is much more to Resource Pools than I thought possible, and they can be modified as well!
Totally awesome! Want to know more? Read Kevin’s posting and be amazed, just like me!
All credits go to Kevin Holman for sharing, AND to his colleague Mihai Sarbulescu, who turns out to be the SCOM Resource Pool guru! Chances are this man knows a lot more SCOM stuff as well, so perhaps other mind-blowing postings are to be expected in the near future?
Tuesday, November 22, 2016
Some background information
In June 2015 Microsoft completed the acquisition of BlueStripe Software, whose flagship product was BlueStripe FactFinder: a dynamic monitoring solution which maps, monitors and troubleshoots distributed applications across heterogeneous operating systems and multiple datacenter and cloud environments.
Impressive on its own, and when combined with SCOM it got even more impressive, since it extended SCOM to an unprecedented level. It enabled SCOM to dynamically discover multi-layered applications, build DAs (Distributed Applications) on the fly and show real-time performance monitoring in the SCOM Console as well.
Sadly, when BlueStripe Software was acquired by Microsoft, the flagship product was pulled. Only updates for existing customers were available, but that was just about it.
OMS it is…
Until now, that is. Microsoft and the former BlueStripe people have worked hard to fold the BlueStripe FactFinder functionality into OMS as a Solution, branded Service Map and previously called Application Dependency Monitor.
It’s in Public Preview, so you can test drive it for free.
When you want to know more about this new OMS Solution, go here and read the whole article all about Service Map, its capabilities and possibilities, written by Nick Burling, Principal Program Manager on the Enterprise Cloud Management Team.
Monday, November 21, 2016
As stated earlier, SCCM uses a new approach for its updates: three to four times per year an update for SCCM becomes available. As a result, Microsoft now speaks of CaaS, ConfigMgr-as-a-Service.
IMHO it’s a success. But who am I at the end of the day? Only one man with a blog, that’s all. And sure, I get positive feedback from my customers when I ask them about their (update) experiences with CaaS.
But still, that’s only a small sample, especially when we talk about SCCM/ConfigMgr.
So gladly Microsoft has published some numbers, which are impressive:
- 50 million+ devices are managed by CaaS (1511 or later);
- 25 thousand+ organizations are using CaaS.
Want to know more? Read this posting by Brad Anderson and be – just like me – amazed & impressed.
Since SCCM 1511 a whole new update mechanism has been introduced. In this new approach the Windows 10 update mechanism, where updates are pushed out in so-called ‘rings’, is used by SCCM 1511 and later as well.
As such SCCM is growing into a Software-as-a-Service model, titled CaaS, ConfigMgr-as-a-Service. As a result, the latest & greatest version of SCCM is dubbed ConfigMgr Current Branch.
For all of my customers this approach works great. No more Googling required in order to see whether their SCCM environment is up to date. Instead the SCCM Console itself tells the admins when an update is available.
And it doesn’t end there. SCCM also aids in rolling out the upgrade! Of course, a backup of the related VMs and SQL database is always advised, but still SCCM itself aids you in upgrading to the Current Branch, by:
- Notifying you when an update is available;
- Downloading it;
- Packaging it and uploading it to the DPs;
- Running the Prerequisite Check in order to verify the current environment is ‘upgrade ready’;
- Rolling out the upgrade;
- Upgrading all the related components;
- Warning you when something goes wrong (haven’t heard about this happening at all yet);
- Finishing up and cleaning up all the ‘mess’ which is normal after any upgrade.
As such, rolling out an upgrade of SCCM/ConfigMgr has evolved from a tedious and sometimes even hideous task into a controlled workflow which is pretty solid.
This results in faster adoption of the Current Branch, so in order ‘to keep up’ one has to invest less time, fewer resources and less budget.
Therefore I am hoping that one day the rest of the System Center 2016 stack will adopt the same approach as used by SCCM today.
It would lessen the administration burden significantly and help companies to grow into the idea that System Center-as-a-Service (SCaaS?) is good, helping them to adopt Azure based workloads and services even faster.
Hopefully Microsoft will choose this approach one day. Please let me know how YOU think & feel about such an approach.
Ever wanted to test drive OMS without having to connect your own environment to it? So you can see what it does, how it works and what kind of services OMS can deliver for your organization?
Especially for this kind of scenario Microsoft has made the Operations Management Suite Experience Center.
What it offers/does? As Microsoft states: ‘…You will log in as an administrator for an enterprise organization, Contoso. The environment has 500 servers, running on-premises as well as in the cloud – in both Azure and AWS. The on-premises system is managed by System Center, and the key workloads being monitored include Exchange, SharePoint, SQL, and even MySQL running on Linux…’
With the OMS Experience Center you can test OMS without uploading a single bit of data from your own servers. This will help you build a proper business case for your organization before starting to use OMS with its own servers.
Want to test drive OMS? Go to the Operations Management Suite Experience Center and sign up!
Suppose you’ve got a ConfigMgr 1606 (or older) environment and have heard about the Current Branch 1610 being available. However, as it’s rolled out globally, it might take some time before it’s available in your region.
Now there are two things you can do: WAIT until it’s available (ConfigMgr will let you know when it’s there), OR run a PS script which puts you in the first wave of customers getting the update, AKA the Early Update Ring.
This PS script is made by the Configuration Manager Team, so you know it’s good. The script can be downloaded from here.
How it works
- Download the script which is packed as a signed executable so the code can’t be tampered with;
- Unpack the executable. By default it unpacks the PS script in the folder C:\EnableFastRing;
- Run the PS script EnableFastUpdateRing1610.ps1 and enter the name of your Site Server and hit <enter>;
- Open the ConfigMgr Console with admin permissions and go to \Administration\Overview\Cloud Services\Updates and Servicing. Hit the Check for Updates button and wait.
!!! When upgrading from 1511, you must restart the SMS_Executive service first !!!
- Happy upgrading!
Sunday, November 20, 2016
Last Friday Microsoft released the November Refresh for Azure Stack. Many deployment fixes and Azure PaaS services are added!
You can download it from here.
Microsoft released Azure Advisor under public preview.
As Microsoft states: ‘…While it’s easy to start building applications on Azure, making sure that the underlying Azure resources are setup correctly and being used optimally can be a challenging task…’
Therefore Microsoft released Azure Advisor which is ‘…a personalized recommendation engine that provides proactive best practices guidance for optimally configuring your Azure resources…’
What it does? Again, as Microsoft states: ‘…Azure Advisor analyzes your resource configuration and usage telemetry to detect risks and potential issues. It then draws on Azure best practices to recommend solutions that will reduce your cost and improve the security, performance, and reliability of your applications…’
Please know this service is in Public Preview, so you can use it for free. When it will become generally available, and under what pricing, I don’t know. Yet IMHO this service will help many companies utilize their Azure resources in an optimal manner.
Want to know more? Go here.
Many times I am asked the above question, gladly related to my interest in technologies. Mostly the question comes down to: how do you keep up with all the new technologies and developments?
The answer is quite simple actually. I love to watch videos and with Microsoft Mechanics on Twitter I am quickly informed about ‘the latest & greatest’.
They have tons of videos (many of them only 60 seconds), podcasts, interviews and so on, all about what’s new and how it works.
Of course, in 60 seconds one won’t learn the deeper stuff, but you still learn where to place it in the bigger picture. And when requiring more knowledge there are other sites like Microsoft Virtual Academy, Microsoft Channel 9, lots of blogs and so on. And when that’s not enough, simply Google it (sorry, I don’t use Bing).
And yes, Microsoft Mechanics IS a MARKETING channel, so you’ve got to cut through the marketing mumbo jumbo. But even when that slicing is done, there is still a lot of worthwhile information to be found there.
Wonder what containers are (besides the obvious that is) and why IT in general and you specifically should know? And what containers can do for you (your company, your customers that is)?
Not wanting to read tons of pages but under a hundred, and STILL get a good basic understanding of containerized applications (because that’s what I am talking about)?
Yeah, I know. Your field is IT Operations. So why should you care about application development? Let alone application life cycle? I mean, you’re NOT a developer. Duh!
Well, guess what: the world we know is changing! And not at a leisurely pace, but at the speed of light. As such, it comes in handy to know ‘a bit’ more about the world around you, the new technologies and the new world.
No, it’s not out there YET. But it’s changing already. And my guess is that containers are the next BIG thing and will make the revolution introduced by virtualization look like a walk in the park.
Hopefully I’ve made you curious. If so, READ the FREE e-book, all about Containerized Docker Application Lifecycle With Microsoft Platform & Tools.
And yes, 60 pages in total. Nice isn’t it?
Thursday, November 17, 2016
A few days ago Microsoft released an update for the Office 365 MP, version 7.1.5134.0.
Changes in this MP (taken directly from the related MP guide):
- Upgraded subscriptions authorization method: the monitoring is carried out by an Azure application, not a specific user. Introduced two options to create an application, essential for the monitoring: manual and automatic: Microsoft Office 365 Global Administrator credentials are required for the automatic option of Azure application creation, while Azure subscription can be used for the second (manual) option. See Managing Office 365 Subscriptions section for more details.
- Added Message Center messages categorization (see “Office 365 Incidents and Messages” section for details).
- Added a new Message Center messages type: Planned Maintenance.
- The Management Pack now queries Office 365 Service Communications API V2; added the possibility to customize the endpoints and resource URIs in the advanced subscription settings of the Office 365 wizard while calling the API. The above changes are provided for further support of Chinese subscriptions.
- Added “Configuring Proxy Connection” section to the guide;
- Fixed bug: if several locales (Australian, Russian, etc.) were present on the workstation, the monitoring stopped.
- Updated “Known Issues” section of the guide.
Office 365 MP, version 7.1.5134.0 can be downloaded from here.
Update (11-17-2016): Based on some valid feedback from a reader I added a section about costs. Thanks for the feedback, much appreciated.
I am asked this kind of question by many customers today. In their own environments they’re running SCOM 2012 R2. They know SCOM 2016 is GA and that OMS also has a lot to offer.
Good bye SCOM & hello OMS?
So why not skip SCOM 2016 altogether and move their monitoring into OMS? After all, OMS also uses the Microsoft Monitoring Agent (MMA), uses Intelligence Packs (IPs, the OMS equivalent of SCOM MPs) and offers a Gateway as well (AKA the OMS Log Analytics Forwarder).
In itself a logical question, which isn’t answered that easily, however. Simply because it depends on how you’re using SCOM today.
SCOM = Monitoring. OMS = Log Analytics +++
To put it simply, SCOM is a purebred monitoring tool with some basic log analytics capabilities. OMS, on the other hand, is a super enhanced log analyzer with some (still basic) monitoring capabilities folded into it.
So when you’re using SCOM to monitor workloads, distributed applications and so on, whether on-premises, in the cloud or anything in between, SCOM is still the place to go and the product to use.
Rich Alerting required? SCOM is the product to use
Also, when you have SCOM alerting people when something is wrong in the monitored environment, SCOM is still the product to use, because at this moment OMS has only some basic alerting capabilities built in. Whereas SCOM has predefined Alerts out of the box (based on the MPs imported), OMS doesn’t, so most Alerts have to be defined manually by you. Which is quite a challenge, because you have to think up every possible situation requiring an Alert.
Log analytics required? OMS!
However, when you require a powerful log analytics tool with many preconfigured solutions, like security & auditing, SQL Assessment, AD Assessment and so on, OMS is the product to use. Or better, service.
The speed, dashboarding and possibilities to ‘dig through the collected data’ are totally awesome and unmatched by SCOM. And believe me, SCOM will never get to that level, ever.
So when you require hard log analytics capabilities, OMS is the place to be.
SCOM & OMS. Better together
The good thing is, SCOM & OMS can be combined, so you have the power of SCOM (rich monitoring and alerting) and the log analytics power of OMS. The best of both worlds.
As we already know it’s quite easy to attach SCOM to OMS and from there, have a (sub)set of SCOM monitored servers (whether Windows or Linux) uploading data to OMS as well.
So now you have the power of SCOM and OMS. Totally awesome. The fun thing is, you can try this for free: OMS still offers a free data plan. It’s limited in the solutions it offers, but it will still give you good insight into the capabilities and power of OMS.
This brings me to another important topic: costs.
When your company already has a Software Assurance licensing agreement with Microsoft, chances are it has licenses for the entire System Center suite as part of that SA. In that case leveraging OMS results in an incremental cost on top of your current System Center licenses. Otherwise you will wind up using the ingestion model at $2.30 per GB.
So it’s certainly worth the effort to find out whether your company has an SA in place with licenses for the entire System Center suite. When that’s the case you may use OMS at lower costs than expected.
If not, there is still the free data plan available, allowing you to test drive some OMS functionalities for free.
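To get a feel for what the ingestion model could mean for your wallet, here is a minimal back-of-the-envelope sketch. It assumes the $2.30 per GB rate mentioned above; the per-server daily data volume is a made-up illustration, so check your actual volumes in the OMS portal and verify current pricing (and any SA discounts) with Microsoft:

```python
# Rough monthly cost estimate for the OMS ingestion model.
# The $2.30/GB rate comes from the text above; the per-server daily
# volume is an assumption for illustration only.

PRICE_PER_GB = 2.30  # USD, pay-per-GB ingestion rate cited above

def monthly_ingestion_cost(servers, gb_per_server_per_day, days=30):
    """Estimate the monthly OMS bill for a set of monitored servers."""
    total_gb = servers * gb_per_server_per_day * days
    return round(total_gb * PRICE_PER_GB, 2)

# Example: 50 servers, each uploading roughly 0.1 GB per day
print(monthly_ingestion_cost(50, 0.1))  # 150 GB/month -> 345.0
```

Numbers like these make the SA question above concrete: a few hundred dollars a month under the ingestion model may well be more than the incremental cost under an existing System Center SA.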
SCOM 2012 R2 or SCOM 2016?
When you’re on SCOM 2012 R2 level I strongly advise upgrading to SCOM 2016. Why? There are many reasons; this is the Top 3:
- Growth of capabilities and functionality
What do you think: will Microsoft add new capabilities and functionality to SCOM 2012 R2 or to SCOM 2016? Exactly! So SCOM 2016 has more of a future ahead of it than SCOM 2012 R2.
- Know what the future will bring? No?
Neither do I. But it’s better to prepare yourself for it. Thus rolling out ‘the latest & greatest’ is a better approach than holding on to SCOM 2012 R2 up to and beyond July 2017. Sooner than expected you may end up in an unsupported scenario, where some new workload isn’t covered by SCOM 2012 R2 but by SCOM 2016 instead. Ouch!
Microsoft goes by the mantra ‘Mobile First, Cloud First’. So it’s evident that OMS will keep on growing BIG time. Things we’re missing at this moment (like real monitoring, objects and health states included, with rich Alerting) are most likely to be added sometime in the future. Until then, however, SCOM is the product delivering this functionality out of the box.
So SCOM still has a valid business case, and will have one for years to come. Nonetheless, it can’t and won’t hurt to take a look at OMS and start using it (the free data plan is a good start). Also combine it with SCOM and go from there.
What surprises me the most is the pace of growth of OMS. In less than two years, tons of new features have been added. And that pace won’t lessen, I know that for sure. So we’ll see new features, improvements of existing ones and so on.
When running SCOM 2012 R2 for rich monitoring and Alerting, SCOM is still the product to use. However, this doesn’t exclude the use cases for OMS.
OMS delivers rich and enhanced log analytics capabilities. Combined with SCOM you’ve got yourself a rich monitoring and log analytics platform at hand, with which you can drill deeper into the very core of your IT assets than you ever imagined.
It will be an exciting journey, starting with SCOM 2016 on-premise and OMS in the cloud.
Note: This article contains copied text from this article written by The Scripting Guys, a Microsoft blog all about PowerShell and OMS.
Last August Microsoft introduced the advanced detection capability in OMS Security. It scans more than seven billion events per day(!) and analyzes them to generate useful detections.
OMS Security advanced detections are provided as a service, which means that customers don’t have to create or maintain the infrastructure and write threat detection rules. Microsoft does it for them on a global scale and brings Microsoft’s vast security knowledge and tools into play.
Microsoft is continuously adding new patterns and new detection types to keep up with the latest attack techniques. Microsoft also keeps monitoring the detections in order to reduce false positives as much as it can.
Since yesterday this service is available in Europe as well and is automatically enabled for all OMS Security customers.
Want to know more about this powerful feature, which is RTU (Ready To Use) without requiring any configuration at all, except for rolling out the Microsoft Monitoring Agent to the systems you want to cover, or connecting your SCOM environment to OMS? Go here.
OMS is growing in capabilities and coverage on an almost weekly, if not daily, basis. Features like this one are really useful and offer good insight into how secure your organization really is and whether there are breaches. Normally it would take a lot of time, resources and money to roll out such a service. And now it’s available with just a few mouse clicks, for a very affordable price!
For me this is a typical showcase of the power of the cloud and the services it has to offer.
Friday, November 4, 2016
Since a few days the new Integration Packs (IPs) for Orchestrator 2016 are available for download:
- System Center Components and other Microsoft Technologies
- HP iLO & OA
- HP OM
- HP SM
- IBM Tivoli Netcool/OMNIbus
- VMware vSphere
Click on the links of the IPs you require for your environment.
Saturday, October 22, 2016
In this posting I’ll write about my experiences upgrading one of my SCOM 2012 R2 UR#11 environments to SCOM 2016 GA (SCOM 2016 + UR#01). The SCOM 2012 R2 environment is rather small, but still representative of many SCOM 2012 R2 environments, since it consists of two SCOM Management Servers and a few SCOM Agents.
Because the upgrade process of a SCOM Agent to SCOM 2016 is the same no matter the number of Agents, IMHO the upgrade of my SCOM 2012 R2 UR#11 environment is applicable to many SCOM environments. The only thing lacking here is at least one SCOM Gateway Server. But since a Gateway Server is in essence a SCOM Agent with some additional features, upgrading such a server is a walk in the park, especially compared to upgrading a SCOM Management Server.
This is my test lab:
- 2x SCOM Management Servers;
- No SCOM Gateway Servers;
- 1x dedicated SQL instance, hosting both SCOM SQL databases;
- 4x SCOM Agents.
Side note: Yes, it’s a small environment, but it runs locally on my laptop, alongside an SCCM environment providing FEP functionality for all VMs, an additional SQL server and a DC. So I think it’s still quite something for an average notebook.
All servers involved run Windows Server 2012 R2. SQL Server 2012 SP1 x64 is used for the SQL instance hosting the SCOM SQL databases.
There is much to tell, so let’s start.
A 01: Pre-Upgrade Tasks
NEVER EVER skip the Pre-Upgrade Tasks! Preparation is key; otherwise the upgrade is most likely to fail, which is bad. And something else as well:
Before you start: BACKUP! All SCOM Management Servers, to be more specific. When they’re VMs, make snapshots or clones. And for the SCOM SQL databases: make VALID backups!
Microsoft recently published a TechNet article all about the Pre-Upgrade Tasks. So I won’t repeat them but only highlight some steps here. STILL TAKE CARE TO COVER ALL STEPS AS MENTIONED IN THIS TECHNET ARTICLE!!!
- Step 2: Clean the Database (ETL Table)
This is serious business. In any real-life SCOM environment this table may become quite big. In everyday life one doesn’t notice this, but during an upgrade this table will become the culprit of slow (everlasting) upgrades, which may eventually fail. So RUN the query in order to clean up that ETL table.
- Step 7: Stop & disable all related SCOM services on the Management Servers when not being upgraded
This step makes perfect sense. Simply because when running the upgrade on a given SCOM Management Server, the underlying SCOM databases are also ‘touched’.
Even more when it’s the first SCOM Management Server to be upgraded. In such a case the SCOM databases are upgraded as well, and when done, a flag is set so when the next SCOM Management Server is upgraded, the SCOM databases aren’t upgraded AGAIN…
However, even when the SCOM databases are already upgraded and another SCOM Management Server is upgraded, the SCOM databases are touched. In cases like that it’s better not to have other SCOM Management Servers running read/write actions on those very same databases at the same time.
Therefore, only have the related SCOM services in a RUNNING state on the SCOM Management Server which is being upgraded; on all other SCOM Management Servers they should be stopped & disabled.
- Step 11: Disable the Operations Manager website in IIS
Since SCOM 2016 is in the process of ditching the Silverlight dependency in the SCOM Web Console, the SCOM 2012 R2 UR#11 Web Console gets an overhaul as well. Therefore it’s better to disable this website in IIS, so the upgrade process won’t find any locked handles or processes on it. This also enhances the chance of a successful upgrade.
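On Step 2: the reason the ETL cleanup query from the TechNet article can run for a while is that it deletes rows in batches inside a loop, rather than in one huge transaction. As a language-neutral sketch of that batched-delete pattern (a plain Python list stands in for the database table here; use the exact T-SQL from the TechNet article against your actual environment):

```python
# Sketch of the batched-delete pattern used by the ETL cleanup query.
# The real cleanup is T-SQL against the OperationsManager database;
# this list-based model only illustrates the loop shape:
# DELETE TOP (@BatchSize) ... repeated WHILE rows were deleted.

def groom_etl(rows, should_groom, batch_size=100_000):
    """Repeatedly delete up to batch_size groomable rows until none remain.

    Returns the surviving rows and the number of delete batches executed."""
    batches = 0
    while True:
        doomed = [r for r in rows if should_groom(r)][:batch_size]
        if not doomed:
            return rows, batches
        doomed_set = set(doomed)
        rows = [r for r in rows if r not in doomed_set]
        batches += 1

# Example: 250k rows, groom everything below a watermark, batches of 100k
survivors, batches = groom_etl(list(range(250_000)), lambda r: r < 230_000)
print(len(survivors), batches)  # 20000 3
```

Batching keeps each transaction (and the transaction log growth) small, which is exactly why the cleanup should be run up front instead of letting the upgrade wade through that table itself.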
A 02: Pre-Upgrade Steps I like to add
In addition to the earlier mentioned TechNet article there are a few additional pre-upgrade steps I would like to add as well here.
- Install the SCOM 2016 Console prereqs on the SCOM Management Servers where the SCOM 2012 R2 UR#11 Console is also installed.
Even though the SCOM 2016 installation/upgrade wizard checks for these prereqs:
It’s better to avoid those Alerts and install these two prereqs yourself in advance. So install the Microsoft CLR Types for SQL Server 2014 and the Microsoft Report Viewer 2015 Runtime beforehand.
- DOUBLE check the environment
Meaning: is your current SCOM 2012 R2 UR#11 environment upgradable? Do the underlying Windows Server OSes AND SQL instances support SCOM 2016?
Use this TechNet article to verify it. When some components aren’t supported in SCOM 2016, address those issues first before upgrading to SCOM 2016.
- SCOM 2016 UR#01
As stated earlier, UR#01 for SCOM 2016 brings it to GA level. So there is NO doubt about whether or not to install those bits as well: SIMPLY INSTALL THEM! So download the required bits, and when your SCOM 2012 R2 UR#11 environment is upgraded to SCOM 2016, install UR#01 right after it.
In order to save time and work, you can skip a step for the SCOM Agents: simply upgrade the SCOM 2012 R2 UR#11 Agents to SCOM 2016 UR#01 right away, since SCOM 2012 R2 UR#11 Agents communicate just fine with a SCOM 2016 UR#01 environment, thus buying you more time.
So download the SCOM 2016 UR#01 bits before starting the upgrade to SCOM 2016 RTM.
Also good to know: SCOM 2016 UR#01 only touches the SCOM Management Servers, Consoles and Web Console, not the SCOM Gateway Servers. The SCOM Agent is touched as well, since the Agent staging folders on the SCOM Management Servers get an MSP update package (KB3190029-amd64-Agent.msp for x64 and KB3190029-i386-Agent.msp for x86).
- SCOM SDK Account
When upgrading, SCOM 2016 will ask for the SCOM SDK (Data Access) account credentials. So make sure you’ve got them at hand.
Run the upgrade with an account which is a local admin on the SCOM Management Servers to be upgraded, has SCOM admin permissions in SCOM, and has SQL sysadmin permissions on the SQL instances hosting the SCOM SQL databases. Otherwise the upgrade will fail!
B 01: Upgrade – First SCOM Management Server
Now it’s time to start the upgrade, and I start with the first SCOM 2012 R2 UR#11 Management Server, which also hosts the RSME role, the SCOM Web Console and the Console.
Additional information regarding the required UR level:
For the upgrade itself Microsoft has also recently published an updated TechNet article, to be found here. And as you can see, SCOM 2012 R2 doesn’t have to be on UR#11 level, since the upgrade can also be run from UR#9. My guess is, however, that most SCOM 2012 R2 environments are on UR#11 already. When yours is on UR#9 you don’t have to roll out UR#11 first; just make sure you meet all requirements and go through all pre-upgrade tasks successfully. Then you’re ready to upgrade to SCOM 2016 just as well.
I won’t post all screenshots, only the most important ones. And apologies for their lack of quality: I recorded the upgrade with the built-in Steps Recorder, not knowing the screens are saved at a lower quality.
And YES, I’ve stopped & disabled all SCOM services on all other SCOM Management Servers which aren’t being upgraded at that moment.
B 02: Upgrade – Second SCOM Management Server
This upgrade runs much faster since the related SCOM databases are already successfully upgraded (as such the upgrade flags are set in those databases, telling the upgrade to skip them) and this second SCOM Management Server only runs the Console, NOT the Web Console.
Since I stopped & disabled the SCOM services on this server before upgrading the first SCOM Management Server, I start them again (and set them to start automatically) BEFORE upgrading this SCOM Management Server, and STOP & DISABLE them on the FIRST SCOM Management Server, which is already upgraded. DON’T FORGET THIS!!!
Now I start the SCOM services on the first SCOM Management Server and set them to start automatically. Also I start the SCOM website in IIS. Now the most crucial SCOM components are upgraded to SCOM 2016 RTM. There are no SCOM Gateway Servers in my environment to upgrade .
B 03: Upgrade – Install SCOM 2016 UR#01
Now it’s time to install SCOM 2016 UR#01. The order is the same as for all URs for SCOM 2012 R2:
- Management Servers;
- Gateway Servers (not applicable, since SCOM 2016 UR#01 doesn’t touch them);
- SCOM Console(s);
- SCOM Web Console;
- SQL query (SCOM 2016 UR#01 only touches the OpsMgr database);
- SCOM Management Packs;
- SCOM Agents (when running Gateways, copy the MSP files to their Agent staging folders!).
So I start with the first SCOM Management Server which also hosts the SCOM Web Console and Console. In this case these SCOM 2016 components are touched:
- SCOM Management Server;
- SCOM Web Console;
- SCOM Console.
After this I upgrade the second SCOM Management Server, also hosting the Console.
In this case these SCOM 2016 components are touched:
- SCOM Management Server;
- SCOM Console.
And now – except for the SCOM Agents that is (still on SCOM 2012 R2 UR#11 level) – SCOM is on 2016 UR#01 level.
C 01 – Upgrade – The Aftermath
Now it’s time to wrap it all up with these steps:
- Upgrade SCOM Agents to SCOM 2016 UR#01
Upgrade them in batches. When they were installed from the Console, upgrade them from there as well. For all other, manually installed Agents, update them either manually or automated via GPO or SCCM.
- Install the license
Go from eval to retail by installing the SCOM license. Fun fact: the same SCOM 2012 R2 key is used by SCOM 2016 as well…
- Upgrade Helper MP
I used Wei H Lim’s excellent Upgrade Helper MP in order to assure myself EVERYTHING is on SCOM 2016 level. Both dashboards were checked, so I am 100% sure all is okay:
Some Agents have issues, but they are on SCOM 2016 UR#01 level nonetheless.
- Remove the Upgrade Helper MP
When you’re 100% sure all components are on SCOM 2016 level, you can safely remove the Upgrade Helper MP.
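Planning the Agent upgrades ‘in batches’, as mentioned above, can be sketched with something as simple as this. The server names and batch size are made up for illustration; in practice you’d feed it your own Agent inventory (for example, the list of manually installed Agents):

```python
# Sketch: split a list of SCOM Agents into upgrade batches, so only a
# subset of Agents is being touched at any one time. Names and batch
# size are illustrative, not taken from a real environment.

def upgrade_batches(agents, batch_size):
    """Yield successive batches of agents to upgrade."""
    for i in range(0, len(agents), batch_size):
        yield agents[i:i + batch_size]

agents = [f"server{n:02d}.contoso.local" for n in range(1, 8)]
for batch in upgrade_batches(agents, 3):
    print(batch)  # three batches: 3 + 3 + 1 agents
```

Working through the Agents batch by batch keeps the monitoring gap small and makes it easy to pause and verify (for example with the Upgrade Helper MP dashboards) before moving on to the next batch.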
Since SCOM 2016 isn’t that much of a change compared to SCOM 2012 R2, the upgrade is less likely to fail, especially when you’re sure all components (underlying Windows Server OS and SQL instances included) meet the SCOM 2016 requirements AND you respect the pre-upgrade tasks.
Compared to all other SCOM upgrades I have done before, this was the easiest one. Nonetheless: PREPARATION is KEY!!!
Also: when your current SCOM 2012 R2 environment comes from SCOM 2012 SP1 or even older, THINK TWICE before upgrading to SCOM 2016. Chances are things will break during the upgrade, so seriously consider the side-by-side scenario, in which a new SCOM 2016 environment is rolled out alongside your current SCOM 2012 R2 environment and monitoring is gradually moved to the new SCOM 2016 UR#01 environment.