Tuesday, November 21, 2017

UR#4 SCOM 2016 Is Out!

Update Rollup #4 for SCOM 2016 has been available for some weeks now. KB4024941 tells you what’s fixed and what the known issues are with this UR#4.

Even though the same KB contains installation instructions, I highly recommend reading Kevin Holman’s UR#4 installation instructions instead.

Verdict
This UR#4 doesn’t add many new features. The SCOM Web Console still requires Silverlight for some parts in order to function properly. And the SCOM Console itself still has some serious (performance) issues.

Nonetheless, this UR#4 should be installed in any SCOM 2016 environment, if only to keep it at a well-maintained level.

I can’t wait until Microsoft finally starts delivering on the much-promised frequent continuous releases for the rest of the System Center stack, SCOM included. Hopefully by then the SCOM Web Console will have outgrown its much-*loved* Silverlight dependency and SCOM will show the much-requested (Console) performance enhancements…

Until then, any new Update Rollup won’t be that special at all…

I Am Back!

Partially, that is, but I am getting there. So soon enough new postings will follow.

Monday, August 28, 2017

Out of order...

While mountain biking I had an accident in which I broke my clavicle. As such, this blog will be silent for a while.

Rest assured, 'I'll be back', to quote a famous line from an equally famous movie.

Thursday, August 24, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 06 – SCVMM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM
05 – SCSM


In this sixth posting of the series I’ll write about how System Center Virtual Machine Manager (SCVMM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch and SCSM, I don’t think that SCVMM is going to the cloud…

SCVMM
First released in 2007, and to enterprise customers only, it targeted the management of large numbers of virtual servers based on Microsoft Virtual Server (yikes!) and later (Q2 2008) on Hyper-V.

Since then VMM has grown into a product of its own. With every release new functionality was added, whereas other features were removed. For instance:

  1. P2V migration removed from SCVMM 2012 R2 onwards;
  2. Support for Citrix XenServer removed from SCVMM 2016;
  3. Creation and management of private clouds, added in SCVMM 2012 R2.

Private Cloud
Let’s dwell a bit longer on the last item of the previous list: the creation and management of private clouds.

On October 18th 2013, Microsoft announced the General Availability of Windows Server 2012 R2 (Cloud OS) and System Center 2012 R2. Brand new here was Microsoft’s Private Cloud vision.

Back then Azure was still branded Windows Azure and offered mostly IaaS (VMs, storage) and some PaaS (websites, SQL, Python SDK and Traffic Manager).

As such, the public cloud was limited in its reach and capabilities. Nonetheless, Microsoft’s top brass envisioned everyone going to the cloud: first everyone would build their own cloud (the Private Cloud) and later on move it into the public cloud, like Azure.

In order to enable the private cloud based on Microsoft technologies, System Center 2012 R2 had to make it happen. And SCVMM 2012 R2 would be the enabler of everything: bringing compute, storage and networking together, abstracting them and offering them as ready-to-use/consume building blocks for the (internal) customers of the IT department.

Instead of having to worry about compute, storage, networking, connectivity and middle tiers (like SQL and web, for instance), a business unit could provision itself with just as many web/SharePoint servers (for instance) as required, as long as it fit into the amount of resources assigned to them: hence the private cloud.

All through a portal. Behind the scenes SCVMM would initiate the required workloads, using a library of images, additional software and configuration items. And with deep integration with SCOM (to monitor the provisioned VMs and the underlying hypervisors) and SCOrch, SCVMM could roll out almost anything, no matter how many tiers were required.

Simply because the moment a certain type of installation was out of SCVMM’s reach, one or more SCOrch runbooks could take care of it, complete with the registration and handling of the required tickets in a service management system like SCSM.

The promise and the reality
In itself it sounded awesome. Finally the fully automated datacenter was there. Just roll out a bunch of servers, network components and loads of storage, and SCVMM would take care of the rest. Even bare-metal deployment of new Hyper-V hosts could be handled by SCVMM!

So say goodbye to rogue IT and – as the IT department – be back in control in such a way that you’re really helping the organization forward instead of frustrating it. Now a business unit could allocate workloads as required, all with ‘nothing’ more than a self-service portal and the click of a mouse button…

However, reality turned out to be a ‘bit’ harsher. For instance, the maintenance of SCVMM can be quite a challenge, especially when the SCVMM ‘fabric’ (all the components and servers used for the SCVMM environment) consists of a lot of servers, combined with a huge SCVMM library.

Especially the latter can be a real pain to maintain and keep healthy. Orphaned resources in the SCVMM library are a ‘special’ treat. And yet, without the library SCVMM is dead in the water. Other challenges with the library are (but not limited to…): migrating the library to a new set of servers, making it highly available (and keeping it that way!), or upgrading it to the latest version.

All in all, it made SCVMM quite a challenge to run and maintain. The end-user experience wasn’t that good either. As with any other software, a good GUI is crucial. And somehow Microsoft seems to have issues with providing good user interfaces, whether full-blown or web-based. SCVMM isn’t an exception here, unfortunately…

Trial & error: the SCVMM self-service portal
From the beginning the self-service portal of SCVMM turned out to be a challenge, to say the least. Slow, buggy, not covering all required aspects of a full-blown self-service portal, you name it: the SCVMM self-service portal had it all.

SCVMM 2012 R2 ditched it completely and replaced it with System Center 2012 R2 App Controller. This replacement also delivered (basic) integration with Azure, enabling moving VMs to Azure. Actually, you had to store the VM in App Controller and then copy it to Azure using the Azure Transfer Wizard incorporated in App Controller. Afterwards, additional steps in Azure were required as well. As a result the ‘integration’ between your private cloud and Azure became a joke.

Due to its lack of success, App Controller was dropped from the System Center family. When you still require a self-service portal wrapped around SCVMM 2016, Microsoft recommends Windows Azure Pack instead, which is another challenge in itself.

However, between System Center 2012 R2 and System Center 2016 the world evolved quickly. And so did Microsoft. So the private cloud mantra was scuttled because the world didn’t embrace it to the extent the Microsoft top brass back then anticipated. Time for another approach!

Say hello to hybrid cloud (and goodbye to private cloud)
Microsoft noticed that companies weren’t ready to roll out many different hard-to-manage and hard-to-maintain System Center components in order to build their own private cloud.

Sure, the SDDC (software-defined datacenter) is still something many companies want, but it’s easier to build and maintain using other integrated hardware/software solutions. On top of that, many companies chose altogether not to board the private cloud train, since it didn’t bring them to where they wanted to go.

Instead, companies started to look for more hybrid solutions where their data, applications and workloads run in different environments, whether on-premises (100% datacenter or 20% private cloud, who cares?) or in the public cloud. As a result their workforce can work anytime, anywhere, with any device (when the apps are right, that is).

Hence the hybrid cloud was born, with a far bigger life expectancy and future ahead than the (Microsoft) private cloud ever had. The main reason here is that the hybrid cloud is based on the needs and requirements of the customers themselves, whereas Microsoft’s private cloud was forced upon those very same customers by Microsoft HQ. And even Microsoft HQ can’t dictate to the world what to do or what to use. I guess a different wind blew back then at Microsoft HQ compared to more recent years.

Hybrid cloud: a nail in the coffin of SCVMM
As a direct result, the role of SCVMM has dwindled big time. It’s back to its original function as intended when introduced back in 2007: managing large numbers of hypervisors, primarily based on Hyper-V. Sure, you can also manage – a bit, that is – VMware hosts through vCenter with it, but you really don’t want to go there, believe me.

Which is good, because SCVMM 2012 R2 never really delivered on enabling the private cloud. Too much of a pain to maintain, configure and the lot. Also limited use cases, since per type of configured server you have to go through the process of building it and saving the related configurations as profiles. Only doable and justifiable when you roll out tens to hundreds of servers like that. For the smaller IT shop, however, it’s undoable and unrealistic.

Verdict
Sure, in a hybrid scenario there are still valid use cases for SCVMM, as long as you’ve got a considerable number of Hyper-V hosts running locally. However, when migrating/moving to the cloud, the use case for SCVMM lessens.

Yes, I know that Microsoft has released more information about delivering features and enhancements at a faster cadence for some System Center components, SCVMM included.

Nonetheless, workloads will move more and more into the cloud, whether VM-based (IaaS) or service-based (PaaS). As such, the role of SCVMM will diminish over time. It will never regain the role it once had with the GA of System Center 2012 R2 (enabling the private cloud).

Therefore, with the move to Azure, SCVMM won’t stick and will shrink (at best!) into a tool to manage Hyper-V based workloads running locally.

But when looking at how the Azure portal is growing in reach – in conjunction with Azure Automation Webhooks and Hybrid Workers – chances are that you’ll end up managing your local Hyper-V and perhaps even VMware hosts from the Azure portal.

With that, the role of SCVMM will be downplayed even more, reduced to an emergency tool for when there is no internet connection, or the starting point for rolling out new Hyper-V hosts; the moment they’re online, management will be taken over by the Azure portal.

Coming up next
In the seventh and last posting of this series I’ll write about SCOM (System Center Operations Manager). See you all next time.

Wednesday, August 23, 2017

Azure Under The Hood – 01 – A New Series

How stuff works
All my life I have wanted to know ‘how stuff works’. Just hitting a button to use a vacuum cleaner, dishwasher, laptop, RC car, mobile or whatever won’t ‘fly’ with me for long. Soon I’ll be prying, investigating and ‘researching’ WHY something works and based on what principles.

Sure, it has cost me some childhood birthday presents (a radio I once got for my birthday was dismantled within a day and beyond repair…), but I always LEARNED from it. That attitude didn’t change when PCs came into my life, or rather into my father’s professional life.

Of course, for the first months I kept a respectful distance and only used the PC (an IBM PS/2) as allowed by my father, all the while keeping a keen eye on the big white case, wondering what magic was happening right there under my nose. So you can imagine how thrilled and happy I was when the PC broke down and the technician had to be called! Somehow my father wasn’t that happy about it…

I made sure to be there when the technician came around to fix it. So when he opened the PC case I was just a few inches away, firing off questions and pointing out all the different parts in order to learn as much as possible. The PC got repaired and I learned a lot that day. Some years later I started to assemble my own PCs…

Azure & me
This attitude/curiosity hasn’t really changed over the years. No, I won’t break anything apart anymore in order to learn from it. Today there are Google, YouTube, Wikipedia and the lot. That saves me a lot of hassle, and money as well. Sure, it takes away a lot of the investigation fun, but it keeps me out of trouble too.

But still, it gnaws at me when I use something without having a deeper understanding of it. The same goes for Azure. Yes, I know what a computer is, what a network does, what a datacenter is for. But Azure is WAY MORE THAN THAT!

As such I’ve done a lot of investigation: read a lot of books and online articles, watched many videos and so on. Simply to gain a deeper understanding of what’s happening under the hood of Azure, or rather what happens when you’re clicking around in the Azure portal.

The funny thing is that Microsoft is quite secretive about it. Even towards MVPs they don’t share a lot. And when I found some information, I had to double-check it in order to know with full 100% certainty that I am not violating any NDA. When in doubt, I don’t share it.

The new series
As a result I’ve collected a lot of interesting non-NDA information about Azure under the hood, to be shared with you out there. No, it won’t make you (nor me, for that matter) an ‘Azure-Under-The-Hood expert’, but at least it will give you a better understanding of how Azure works.

In the time to come I’ll share that information. But please feel free to comment and send in your own findings. I’ll use those as well and, of course, credit you as the source.

See you all next time!

Microsoft By The Numbers

I bumped into this website by accident.

It shows how many people are using Microsoft products and services. The numbers are VERY impressive… And NO, the presentation isn’t dull like an Excel sheet (boring!) or a long list (yawn!).

Instead it’s more like an animated infographic. Go here to see what I mean and be amazed. You can even download the related PowerPoint slide deck and use it.

Tuesday, August 22, 2017

Azure Managed Disks: How Azure VMs Are Moving To PaaS

Introduction
When implementing Azure VMs, one is using Azure as an IaaS offering. At least, this is how Microsoft introduced Azure VMs back in 2012. However, things move at a fast pace in IT, and in today’s cloud they move at lightning speed.

As such, it’s time to take a new look at Azure VMs in order to know whether they still adhere to the IaaS cloud delivery model only, or whether things have changed a ‘bit’.

Azure VMs as IaaS
Sure, when you opt for the ‘classic’ approach to rolling out an Azure-based VM, it’s IaaS at its best. You need to provision a Storage Account, perhaps even a Diagnostics Storage Account for monitoring, a Virtual Network and so on. Let’s focus on the Storage Accounts here.

When rolling out Azure VMs in the classic manner you have to think about your Azure subscription limits, since per subscription one is only allowed a certain number of services and resources. For instance, per Azure subscription one is ‘only’ allowed 200 Storage Accounts (by default), with a maximum of 250 (which requires contacting Microsoft Support).

Of course, you could use only ONE Storage Account for all your Azure-based VMs. But that approach isn’t going to ‘fly’, since per Azure Storage Account there are limits as well, like 20,000 IOPS per Storage Account. So when you ‘hook up’ too many Azure VMs to the same Storage Account, the available IOPS per Azure VM will drop dramatically, resulting in underperforming VMs.

In an ideal world one would prefer one Storage Account per Azure VM. However, when requiring 250+ VMs, this approach isn’t viable. Even when the total number of Azure VMs stays well below the 250 mark, there are still quite a few reasons not to use a 1:1 (VM:Storage Account) approach.
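The math above can be sketched in a few lines of Python (purely illustrative; the 20,000 IOPS figure and the 200/250 Storage Account limits are the ones quoted above, while the 40-VM count is a made-up example):

```python
# Rough capacity math for 'classic' (unmanaged) Azure VM storage.
# Limits as quoted in this posting; they may change over time.
IOPS_PER_STORAGE_ACCOUNT = 20_000
MAX_ACCOUNTS_DEFAULT = 200   # per subscription, by default
MAX_ACCOUNTS_RAISED = 250    # after contacting Microsoft Support

def iops_per_vm(vms_in_account: int) -> float:
    """Worst case: all VMs in one Storage Account hit storage at once."""
    return IOPS_PER_STORAGE_ACCOUNT / vms_in_account

# 40 VMs sharing one Storage Account leaves only 500 IOPS each:
print(iops_per_vm(40))  # 500.0

# ...while a 1:1 VM-to-account design caps out at the account limit:
print(MAX_ACCOUNTS_DEFAULT)  # at most 200 VMs without raising the limit
print(MAX_ACCOUNTS_RAISED)   # at most 250 VMs, even with the raised limit
```

Either way you hit a wall: shared accounts starve VMs of IOPS, while dedicated accounts run into the subscription limit.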

As a result, deploying an Azure VM requires planning, preparation, guidance and administration afterwards. Without it, sooner or later your company will have serious problems with Azure VM resource allocation and the lot…

Azure VMs as IaaS++
How nice would it be to roll out Azure VMs without the headache of managing storage accounts? Instead, Azure manages the storage for you! In this case you only have to think about the type & size of the disks.

All of the above (and much more) is delivered by Azure Managed Disks.

So now we’re talking about a new kind of Azure VMs. Sure the Azure VMs themselves are still adhering to the IaaS cloud delivery model, BUT a very important component of that same Azure VM (the disks and underlying storage) has become a different ball game all together.

Instead of doing it all yourself, Azure manages it for you. So the disks – when using Azure Managed Disks, that is – have become IaaS++ at least, perhaps even more like a PaaS solution? Of course, this ‘statement’ could result in a never-ending discussion on semantics. Let’s not go there, please.

But no matter how you look at it, Azure VMs with Azure Managed Disks have taken the IaaS cloud delivery model, in that respect, to a whole new level.

Verdict
Azure VMs with Azure Managed Disks are the next level of how Azure can lighten the regular burden of VM management and administration as a whole. It also brings Azure VMs as IaaS to a new level. One might say IaaS++, or even – for the storage management, that is, when Azure Managed Disks are being used – a PaaS cloud delivery model.

Should my company use Azure Managed Disks?
Good question! Before you make any decision it’s vital to know what Azure Managed Disks deliver and how their costs are structured.

For instance, Azure Managed Disks deliver better high availability out of the box, simply because these disks are automatically placed in different storage units. So when one storage unit goes down, it won’t affect many VMs, but only one or a subset instead.

Also with Azure Managed Disks it’s much easier to copy an image across multiple storage accounts and so on.

On top of it all, there are two ‘flavors’ (AKA performance tiers) of Azure Managed Disks: Premium (SSD-based) and Standard (HDD-based).

Also good to know: you can create 10,000 Azure Managed Disks per subscription, per region, per storage type! For example, you can create up to 10,000 standard managed disks and also 10,000 premium managed disks in a single subscription and region. As a result you can create thousands of VMs in a single subscription.
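A similar back-of-the-envelope sketch for the limits just mentioned (the 10,000-disks-per-type figure is the one quoted above; the three-disks-per-VM layout is a made-up example):

```python
# Managed-disk capacity: 10,000 disks per subscription, per region,
# per storage type (Standard and Premium each get their own pool).
MANAGED_DISKS_PER_TYPE = 10_000

def max_vms(disks_per_vm: int, storage_types_used: int = 1) -> int:
    """How many VMs fit before hitting the managed-disk limit."""
    total_disks = MANAGED_DISKS_PER_TYPE * storage_types_used
    return total_disks // disks_per_vm

# A VM with 1 OS disk + 2 data disks, Standard disks only:
print(max_vms(disks_per_vm=3))                        # 3333
# The same layout, spread over both Standard and Premium:
print(max_vms(disks_per_vm=3, storage_types_used=2))  # 6666
```

Note how the per-type pools add up, which is exactly why thousands of VMs fit into a single subscription.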

As you can see, there is much more to Azure Managed Disks, all of which has to be taken into account when making a decision.

Recommended resources
For a better understanding of this article I recommend reading these resources:

Monday, August 21, 2017

Azure Active Directory (AAD): Where Is My Data Stored?

Situation
A customer wants to use Azure Active Directory (AAD) but needs to know where the data (like user names, credentials and attributes) is stored. In itself a solid question. However, the answer wasn’t easily found. Or rather, it was quite obscure.

The basics
Before the answer can be found (and clarified), one must familiarize him/herself with some Azure ‘slang’. In this posting I limit myself to the terms related to this article.

  • Geo: Abbreviation for geography. At this moment Azure is to be found in 13 geos, and two more have been announced (France & South Africa).
  • Region: Can be looked upon as one HUGE datacenter, hosting many Azure services. For instance, there is an Azure region in Amsterdam (Netherlands) and one in Dublin (Ireland).
  • Region Pair: Two directly connected Azure regions, placed within the same geography BUT located more than 300 miles apart (when possible). An Azure Region Pair offers benefits like data residency (except for AAD…), Azure system update isolation, platform-provided replication, physical isolation and region recovery order.

An example of a Geo, with its Azure Regions and Region Pair, is Geo Europe. This Geo has two Azure Regions: one in Amsterdam (Netherlands), named West Europe, and the other in Dublin (Ireland), named North Europe. Together they make up the Region Pair for Geo Europe.
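The Geo / Region / Region Pair hierarchy above can be modelled as a small lookup table (a sketch only, populated with just the Geo Europe example from this posting):

```python
# Minimal model of the Azure Geo / Region / Region Pair hierarchy,
# filled with only the Geo Europe example used in this posting.
GEOS = {
    "Europe": {
        "West Europe": "Amsterdam (Netherlands)",
        "North Europe": "Dublin (Ireland)",
    },
}

# Each region maps to its partner region within the same Geo.
REGION_PAIRS = {
    "West Europe": "North Europe",
    "North Europe": "West Europe",
}

def paired_region(region: str) -> str:
    """Return the partner region where platform replication lands."""
    return REGION_PAIRS[region]

print(paired_region("West Europe"))    # North Europe
print(GEOS["Europe"]["North Europe"])  # Dublin (Ireland)
```

This structure makes the replication story in the next section easy to reason about: data placed in one region of a pair also turns up in its partner.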

Azure data storage location by default
By default most Azure services are deployed regionally, enabling the customer to specify the Azure Region where their customer data will be stored. This is the case for VMs, storage and Azure SQL databases.

So when you deploy a set of VMs in the Region West Europe with related storage, that data will be stored in Amsterdam (Netherlands). And yes, some parts of that data will be replicated to North Europe as well, since both Regions are part of the same Region Pair. Reasons for this replication might be of an operational nature and/or due to the data redundancy options selected by the customer.

This is as expected. However, it gets trickier…

USDS (United States of Data Storage)?
However, there ARE exceptions to the above. In quite a few cases customer data will be stored outside the customer-selected Region (and Region Pair as such).

For instance, for some regional Azure services – like Azure RemoteApp, Microsoft Cognitive Services, preview, beta or other prerelease services, and Azure Security Center – data may be transferred and stored globally by Microsoft. And many times it will end up (in some form) in the USA, or the United States of Data Storage…

How about AAD?
AAD isn’t an Azure service offered locally; it’s designed to run globally. And any Azure service designed to run globally doesn’t allow the customer to specify a certain Region in which to store the data related to that same service.

And again, Microsoft isn’t very clear about where that data is exactly stored: ‘…Azure Active Directory, which may store Active Directory data globally…’.

To make it even more confusing the same website states: ‘…This does not apply to Active Directory deployments in the United States (where Active Directory data is stored solely in the United States) and in Europe (where Active Directory data is stored in Europe or the United States)…’

Azure services which operate globally are:

  • Content Delivery Network (CDN);
  • Azure Active Directory (AAD);
  • Azure Multi-Factor Authentication (AMFA);
  • Services that provide global routing functions and do not themselves process or store customer data (e.g. Traffic Manager, Azure DNS).

Still not sure where AAD stores its data…
Because Microsoft is a bit elusive about where EXACTLY AAD data is stored, it’s better to look at how AAD is put together technically. Many times the technicians don’t do politics :).

The article Understand Azure Active Directory architecture is quite recent and very informative. It tells about the primary and secondary replicas used for storing AAD data. And the latter make it interesting: ‘…which (the secondary replicas) are at data centers that are physically located across different geographies...’.

Basically it tells me that AAD data is replicated globally. It will turn up in the USA (USDS) as well. As a matter of fact, it will turn up in every Region servicing Office 365, simply because without AAD there is no Office 365 consumption.

And for sure, the same article clarifies it even more under the header Data centers: ‘…Azure AD’s replicas are stored in datacenters located throughout the world…’.

Verdict
When using AAD you know for certain that user data (user names, credentials and metadata, for instance) IS replicated globally.

Do I need to worry?
That depends. Know, however, that Microsoft goes to extreme lengths to secure your data. Physical access to their datacenters is limited to a subset of highly screened people. On top of it all, Microsoft doesn’t allow governments and agencies easy access to customer data.

And yes, Microsoft offers the Trusted Cloud. Looking at the sheer number of certifications and data residency guarantees, you can rest assured that Microsoft does its utmost to offer the most secure cloud services platform ever built.

Alternatives?
Sure, you can look for alternatives, like Amazon AWS S3. However, the metadata related to those ‘buckets’, which also contains customer data, isn’t guaranteed to stay at a certain location either…

Another approach could be using the Azure Geo Germany. Because of VERY strict privacy laws, the exceptions for data storage for regional and global Azure services DON’T apply there…

Recommended resources
For a better understanding of this article I recommend reading these resources:


Cross Post: Speeding up OpsMgr Dashboards Based On The SQL Visualization Library

Dirk Brinkmann (Microsoft SCOM PFE, based in Germany) has posted an excellent article about an easy (and undocumented) way to speed up the SCOM/OpsMgr dashboards based on the SQL Visualization Library MP.

Go here to read all about it.

Thank you Dirk for sharing!

Largest Microsoft Ebook Giveaway!

Ever wanted to know something about the latest Microsoft technologies, but were afraid to BUY an ebook because today’s technologies are changing too fast? So that what you buy today is outdated tomorrow?

Fear no longer! Simply download a FREE Microsoft ebook on the topic you want to know more about and be done with it. Oh, and because it’s FREE, why not download many more Microsoft ebooks?

Want to know more? Hunger for more knowledge? Looking for FREE ebooks, reference guides, Step-By-Step Guides, and other informational resources? Go here and be AMAZED, just like me.

A BIG thanks to Microsoft!

PDF: Overview of Microsoft Azure compliance

When you’re about to use Azure and want to know whether it’s compliant with the regulations your company has to meet, I strongly advise you to download the PDF Microsoft Azure Compliance Offerings.

As Microsoft describes: ‘…Azure compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. Each offering description in this document provides an up to date scope statement indicating which Azure customer-facing services are in scope for the assessment, as well as links to downloadable resources to assist customers with their own compliance obligations. Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific…’

Wednesday, July 26, 2017

Holiday

This blog will be silent for the next few weeks because I am going on holiday, enjoying my family to the fullest.
(Picture from the movie ‘National Lampoon's European Vacation’)

After the holiday ‘I’ll be back’ with quite a few postings, like (but not limited to):

  • The last two postings in the series about the future of the System Center stack related to Microsoft’s ‘Mobile First–Cloud First’ strategy;
  • Quite a few postings about Azure (IaaS & management);
  • SCOM updates and the lot.

I wish everybody a nice holiday (if not already enjoying it) and see you all later.

Bye!

Thursday, July 20, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 05 – SCSM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM


In this fifth posting of the series I’ll write about how System Center Service Manager (SCSM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch, I don’t think SCSM is going to make it to the cloud…

Ever heard of Service Desk?
The very start of SCSM was a bumpy ride. Originally it was code-named Service Desk and was tested back in 2006, with the release scheduled for somewhere in 2008. The beta release ran on the 32-bit(!) version of Windows Server 2003, with IIS 6.0, some .NET Framework versions (of course), SQL Server 2005 and SharePoint Server 2007 Enterprise.

Service Desk was really a beast. Terrible to install, a disaster to ‘run’ (it was slooooooooooooooow) and filled to the brim with bugs. Totally unworkable. Back then I was part of a test team which put the ‘latest & greatest’ of Microsoft’s products through their paces. The whole team was amazed at the pre-beta level of it. Never before had we bumped into such crappy software. We even wondered whether we had received the proper beta bits…

So none of us was surprised when Microsoft pulled the plug on it and sent the developers back to their drawing boards. In the beginning of 2008 Microsoft officially announced it was delaying the release until 2010, because the beta release had performance and scalability issues. Duh!

Meanwhile a new name was agreed upon: Service Manager.

2010: Say hello to SCSM 2010
In 2010 the totally rewritten SCSM 2010 was publicly released at MMS, Las Vegas. For sure, the code base for SCSM 2010 was totally new, but somehow the developers had succeeded in bringing back some of the issues which plagued Service Desk: performance and scalability issues… Ouch!

Because SCSM 2010 was really a first version (totally rewritten code, remember?) it missed out on a lot of functionality. As a result Microsoft quickly brought out Service Pack 1 for it, somewhere near the end of 2010. For SCSM 2010 SP1 a total of 4 cumulative updates was published, alongside a few hotfixes.

From 2012x to 2016 in a nutshell
Sure, with every new version (2012, 2012 SP1, 2012 R2 and 2016) the performance and scalability issues were partially addressed, but they never really disappeared. As a result SCSM has a track record of being slow and resource hungry. For SCSM 2016 Microsoft claims that data processing throughput has been increased by 4 times.

Nonetheless, the requirements for SCSM 2016 are still something to be taken seriously. For instance, Microsoft recommends physical hosts, 8-core CPUs and so on. The number of required systems can run to 10+(!), especially when you want to use Data Warehouse cubes and the lot. Even for enterprises this is quite an investment for just ONE tool.

Also, with every new version additional functionality was added. For instance, SCSM 2016 introduced an HTML-based Self Service Portal. Unfortunately, the first version of that portal had some serious issues, most of them addressed in Update Rollup #2.

All in all, the evolution from SCSM 2010 up to SCSM 2016 UR#2 has been quite a bumpy ride with many challenges and issues.

Deep integration
Of course, SCSM offers a lot of good stuff as well. It’s just that SCSM is – IMHO – the component of the SC stack with the most challenges. One of the things I like about SCSM is the out-of-the-box integration with other tools and environments.

SCSM can integrate with AD and other System Center stack components (SCOM, SCCM, SCVMM and SCOrch). And – still in preview – you can use the IT Service Management Connector (ITSMC) in OMS Log Analytics to centrally monitor and manage work items in SCSM. As a result, the underlying CMDB is enriched with tons of additional information for the contained CIs.

SCSM & Azure
At this moment – besides the earlier mentioned ITSMC in OMS – there are no other Azure Connectors available, made by Microsoft that is. There are some open source efforts, like the Azure Automation SCSM Connector on GitHub. But as far as I know, it isn’t fully functional.

Other companies, like Gridpro and Cireson, are offering their own solutions. But since these companies do have to earn a living as well, their solutions don’t come for free, adding additional costs to your SCSM investment. Still, some of their solutions resolve SCSM pain points once and for all. So in many cases these products deserve at least a POC.

But still the Azure integration is limited. On top of it all, Microsoft itself doesn’t offer any Azure based SCSM alternatives. Azure Marketplace offers a few third party Service Management solutions (like Avensoft nService for instance) but none of them Microsoft based.

Of course, you could install SCSM on Azure VMs, but you shouldn’t, since it’s a resource-hungry product, which would bump up Azure consumption (and thus the monthly bill) BIG time.

No Roadmap?!
Until now Microsoft has been pretty unclear about their future investments in SCSM. There is no roadmap to be found anywhere. So no one knows – outside Microsoft that is – what will happen with SCSM in the near future. Will there ever be a new version after SCSM 2016? I don’t know for sure. But all the telltale signs suggest there won’t be…

ServiceNow
In recent years the online service management solution ServiceNow has seen an enormous push and growth. Not just in numbers but also in products and services.

Basically ServiceNow delivers – among tons of other things – SCSM functionality in the cloud. Fast and reliable. It just works. It also integrates with many environments, tools and the like.

Verdict
SCSM has a troublesome codebase which isn’t easily converted to Azure without (again) a required rewrite. Looking at where SCSM stands today and the reputation it has, I dare say it’s the end of the line for SCSM. No follow-up in the cloud, nor a phased migration (like SCDPM or SCCM) to it.

Instead Microsoft is silent about the future of SCSM which on itself says a lot. One doesn’t need to speak in order to get the message across.

Combined with the power of ServiceNow, fully cloud based, it’s time to move on. When you don’t run SCSM now, stay away from it. Because anything you put into that CMDB must be migrated to another Service Management solution sooner or later. Instead it’s better to look for alternatives that use today’s technologies to the fullest, like ServiceNow or Avensoft nService. For sure, there are other offerings as well. POC them and when they adhere to your company’s standards, use them.

When already running SCSM, upgrade it to the 2016 version. It has Mainstream Support until the 11th of January 2022. That leaves enough time to look for alternatives, whether on-premise or in the cloud. Because SCSM won’t move to the cloud, nor will Microsoft invest heavily in it like it did before adopting their Mobile First – Cloud First strategy.

So don’t wait until 2022, but move away from SCSM before then, so you can do things on your own terms and at your own speed, not dictated by an end-of-life date set for an already diminishing System Center stack component.

Coming up next
In the sixth posting of this series I’ll write about SCVMM (System Center Virtual Machine Manager). See you all next time.

Monday, July 17, 2017

Azure Stack and Azure Stack Development Kit Q&A

Since Azure Stack is GA, many questions have come forward. Not only about Azure Stack but also about Azure Stack Development Kit. I’ll do my best to answer most questions and refer to the online resources as well.

01: What’s Azure Stack?
As Microsoft states: ‘Microsoft Azure Stack is a hybrid cloud platform that lets you deliver Azure services from your organization’s datacenter…’. Still it sounds like marketing mumbo jumbo.

Basically it means that with Azure Stack your organization has the same Azure technology on-premise available, deeply integrated with the public Azure. Of course, Azure Stack doesn’t offer the same breadth and depth of services as the public Azure, but still it packs awesome cloud power. It’s to be expected that with future updates Azure Stack will offer more and more public Azure based services and technologies, based on the use cases and demands of existing Azure Stack customers.

And because Azure Stack and the public Azure use the same technologies, the end user experience is fully transparent. The same goes for the administration experience. So basically Azure Stack can be looked upon as an extension of Azure.

So yes, one could look at Azure Stack as a kind of private cloud which can be heavily tied into the public Azure, thus creating a super powered hybrid cloud. But there is more.

02: Does Azure Stack require a permanent connection with public Azure?
No, it doesn’t. You can run Azure Stack either in a Connected scenario or Disconnected scenario. In a Connected scenario Azure Stack has a permanent connection with the public Azure. In a Disconnected scenario, Azure Stack doesn’t have a permanent connection.

Even though the first scenario – Connected – makes the most sense, there are enough valid use cases for the Disconnected scenario as well. Think about areas with poor internet connectivity combined with a faraway public Azure region. Or how about hospitals, embassies, military installations and bases? The kind of information kept and processed in places like those makes for valid use cases for the Disconnected scenario.

03: Why should companies use Azure Stack while public Azure offers more services and is more powerful?
Good question! Suppose you’ve got a production facility which generates a HUGE amount of data. That data is processed, and the result sets are used further down the production line. In a public Azure setup it would require an enormous data pipeline to Azure in order to get that data across. And when processed, the result sets have to be sent back as well. Which is egress traffic = money. On top of it all there is latency, since the data travels between the factories and Azure.

With Azure Stack, that data is processed locally (no data traffic costs since it’s local LAN, not WAN) and there is little to no latency.

Another valid use case is app development. Here public Azure is used for development and Azure Stack is used for production, or vice versa.

Or how about sensitive data which – based on regulations and law – isn’t allowed to live in the public cloud? Now you can keep the data onsite (Azure Stack) and use apps living in the public Azure.

And these are just some of the valid use cases for Azure Stack. There are many more, believe me.

04: Does Azure Stack offer the same services as the public Azure?
No, it doesn’t. Which makes sense when you compare the size of an average Azure region to an Azure Stack. However, as stated before, the number of services offered by Azure Stack will grow in the future, based on customer demand and use/business cases for Azure Stack.

For now(*) Azure Stack offers these foundational services:

  • Compute;
  • Storage;
  • Networking;
  • Key Vault.

On top of it, Azure Stack offers these PaaS services(*):

  • App Service
  • Azure Functions
  • SQL and MySQL databases

(*: This is per the 10th of July 2017. Since Azure Stack is in constant development, chances are that the set of services offered by Azure Stack will have changed over time. Please check Microsoft for the most recent updates and overview of services offered by Azure Stack.)

05: Can I download Azure Stack and install it on spare hardware I’ve got?
No, you can’t. Because Microsoft has invested heavily in offering you the same Azure experience (pay as you go, consume without worries about the hardware and so on) with Azure Stack, they had to lock down the hardware on which Azure Stack runs.

Therefore Azure Stack is delivered as a whole package, hardware and software integrated into one. For now HPE, Dell EMC and Lenovo deliver Azure Stack with their own hardware. Soon other hardware vendors will follow suit.

06: So I can’t test drive it? How do I know whether Azure Stack works for me?
Sure you can test drive Azure Stack, POC it or use it as a developer environment. For this Microsoft has specifically developed Azure Stack Development Kit.

You can download it for free and install it on hardware of your choice. Of course there are some requirements to be met for this hardware, but still it’s up to you what vendor to use.

07: What’s Azure Stack Development Kit? Can I use it for production?
As Microsoft states: ‘…It’s a single-node version of Azure Stack, which you can use to evaluate and learn about Azure Stack. You can also use Azure Stack Development Kit as a developer environment, where you can develop using consistent APIs and tooling…’

As such Azure Stack Development Kit isn’t meant for production. It’s meant for POCs and stuff like that. Go here to learn more about it.

08: Do I need to pay for Azure Stack?
Sure you do. But the prices are lower compared to using the public Azure. Which makes sense because your company pays the hardware and operating costs. Check out this Microsoft Azure Packaging & Pricing Sheet (*) for more information.

(*: Please know this sheet will be updated in the future. As such, just Google for Microsoft Azure Packaging and Pricing Sheet and you’ll find the latest version of it.)

09: Is Azure Stack Development Kit free?
Yes, Azure Stack Development Kit itself is free. However, the moment you connect it to (one of) your Azure subscriptions and start moving on-premise workloads to the public Azure, you will be charged for it.

10: Do you have some useful links for me?
Sure, hang on. Here are some useful links, all about Azure Stack and/or Azure Stack Development Kit:

Thursday, July 13, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 04 – SCDPM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
05 – SCSM


In the fourth posting of this series I’ll write about how System Center Data Protection Manager (SCDPM) relates to Microsoft’s Mobile First – Cloud First strategy. Even though it’s a bit ‘clouded’ it’s pretty sure SCDPM will move to the cloud, one way or the other. But before I go there, let’s take a few steps back and take a look at SCDPM itself.

SCDPM
From the very first day it saw the light SCDPM was different compared to other backup products. For instance, Microsoft positioned it as a RESTORE product, not a backup product. By this Microsoft meant to say that as a SCDPM admin you could easily restore any Microsoft based workload, like SQL, Exchange, SharePoint and so on, WITHOUT having any (deep) understanding of the products involved.

Even though SCDPM’s usability was limited to Microsoft workloads, it offered a solution to the ever growing amount of data to be backed up within a never growing backup window: continuous backup!

Therefore SCDPM offered something new, if only a refreshed approach to the backup challenges faced by many companies back then.

Unfortunately Microsoft dropped the ball on SCDPM some years later, because further development of new functionality and capabilities was stopped. As such it was overtaken by many other backup vendors, delivering improved implementations of continuous backup and ease of restore.

On top of it all, SCDPM kept its focus on Microsoft based workloads. Only for a short period was SCDPM capable of backing up VMware based VMs (SCDPM 2012 R2 UR#11), a capability abandoned when SCDPM 2016 went RTM. Sure, one of the reasons is that the VMware components required on the SCDPM server to support VMware backup aren’t yet supported on Windows Server 2016. Nonetheless, the result is the same: SCDPM covers Microsoft based workloads only.

Combined, it has led to an ever shrinking market for SCDPM. With Microsoft’s strong focus on Azure it looks like SCDPM is going to the cloud, one way or the other.

SCDPM & Azure
Valid backup strategies are vital for any company, whether working on-premise, in the cloud or hybrid. Therefore Azure offers different backup services, which are potentially confusing. Even more confusing because the starting point for consuming Azure backup services is the same.

It all starts with creating a Recovery Services Vault which is an online storage entity in Azure used to hold data such as backup copies, recovery points and backup policies. From there one can configure the backup of Azure or on-premise based workloads.
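For those who prefer scripting over clicking, creating such a vault can be sketched with the AzureRM PowerShell module of that era. Note this is a hedged sketch, not the only way to do it; the resource group, vault name and location below are made-up examples.

```powershell
# Sketch: create a Recovery Services Vault (AzureRM.RecoveryServices module).
# All names are examples - adjust to your own environment.
Login-AzureRmAccount

New-AzureRmResourceGroup -Name 'rg-backup' -Location 'West Europe'

$vault = New-AzureRmRecoveryServicesVault -Name 'MyRecoveryVault' `
    -ResourceGroupName 'rg-backup' -Location 'West Europe'

# Choose the storage redundancy for the backup data held in the vault
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault `
    -BackupStorageRedundancy GeoRedundant
```

From that vault onwards the portal (or further PowerShell) guides you to the on-premise or Azure workload scenarios described below.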

When choosing to backup on-premise based workloads there are three options to choose from:

  1. When you’re already using SCDPM, you have to download and install the Microsoft Azure Recovery Services (MARS) Agent:
    The MARS Agent is installed on the SCDPM server. Now SCDPM will be extended from disk-2-disk backup to disk-2-disk-2-cloud backup. The on-premise backup will be used for short-term retention and Azure will be used for long-term retention.


  2. Of course, the MARS Agent can be used outside SCDPM as well, in which case you have to install and configure it separately on every server/workstation you want to protect. In bigger environments this creates enormous overhead.

    As such this approach should be avoided and is only viable in smaller environments where you have just a few on-premise laptops/workstations to protect and run everything else in the cloud (Azure/AWS).


  3. When you don’t use SCDPM, you have to download and install Microsoft Azure Backup Server (MABS) v2:

    MABS is actually a FREE and customized version of SCDPM with support for both disk-2-disk backup for local copies and disk-2-disk-2-cloud backup for long term retention. And contrary to SCDPM, MABS supports the backup of VMware based VMs!

    Of course, the moment you start using Azure for long term retention, you have to pay for the storage used by your backups. And the moment you restore from Azure to on-premise or to Azure in another region, you have to pay for the egress traffic.

    On top of it, MABS requires a live Azure subscription. The moment the subscription is deactivated, MABS will stop functioning.


When using a Recovery Services Vault to back up Azure based workloads, you can only back up entire Azure VMs, by means of a backup extension added to each Azure VM. This covers the whole VM and all disks related to that VM. The backup runs only once a day and a restore can only be done at disk level.
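Enabling that VM-level backup can be sketched in PowerShell as well. Again a hedged sketch against the AzureRM.RecoveryServices(.Backup) module of the time; the vault, VM and resource group names are made-up examples, and ‘DefaultPolicy’ is the daily policy a new vault typically ships with.

```powershell
# Sketch: protect an existing Azure VM from a Recovery Services Vault.
# Names are examples - adjust to your own environment.
$vault = Get-AzureRmRecoveryServicesVault -Name 'MyRecoveryVault'
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Attach the vault's default daily backup policy to the VM
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name 'MyAzureVM' -ResourceGroupName 'rg-workload'
```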

Azure Site Recovery
And no, this isn’t everything there is. Another option is Azure Site Recovery.

As Microsoft states: ‘… (it) ensures business continuity by keeping your apps running on VMs and physical servers available if a site goes down. Site Recovery replicates workloads running on VMs and physical servers so that they remain available in a secondary location if the primary site isn't available. It recovers workloads to the primary site when it's up and running again…’

Too many choices to choose from?
As you can see, Azure offers different backup services, aimed at different scenarios. Also, SCDPM can be used together with Azure Backup, turning SCDPM into a hybrid solution.

And SCDPM can be installed on an Azure VM, and the same goes for MABS, enabling you to back up cloud based workloads running on Azure based VMs.

Even more options to choose from! To make it even more confusing, Azure is in a constant state of (r)evolution. What’s lacking today is in preview tomorrow and in production next week. The same goes for Azure backup services and Site Recovery.

Verdict
SCDPM is moving to the cloud. Or better, it has already arrived there. One way is using SCDPM in conjunction with the MARS Agent, another way is installing SCDPM on Azure based VMs. Or instead, using the revamped and customized free version of SCDPM, branded MABS. Which can be installed on-premise or on Azure based VMs.

So there are choices to be made. The right choice depends much on the type of workloads your company is running, combined with the location (on-premise, cloud or hybrid) and the Business Continuity and Disaster Recovery (BCDR) strategy in place.

On top of it, the moment of your decision is also important. Simply because Azure backup services are just like Azure itself, changing and growing by the month. This Microsoft Azure document webpage might aid you in making the right decision.

But no matter what the future might bring, one thing is for sure: SCDPM as a local on-premise entity is transforming more and more into a cloud based solution. Of course, when running on-premise or hybrid workloads, there will be a hard requirement for a small on-premise footprint. But more and more the logic, storage and management of it all will move into the cloud.

On top of it all, many backup options will be integrated more and more into specific services. As a result there won’t be 100% coverage offered by SCDPM or the Azure based backup services. In other cases there won’t be an out-of-the-box backup solution available at all. As a result third parties will jump into that gap, created by Microsoft.

A ‘shiny’ example is the backup of Office 365. Lacking by default and not in Microsoft’s pipeline, Veeam jumped into that gap by offering a solution made by them.

So in the end, the technical solution to your company’s BCDR strategy might turn into a hard to manage landscape of different point solutions instead of the ultimate Set & Forget single backup solution…

Coming up next
In the fifth posting of this series I’ll write about SCSM (System Center Service Manager). See you all next time.

Webinar: PowerShell Monitoring Management Pack

SquaredUp will present a webinar on the 19th of July in which they will release their PowerShell Monitoring Management Pack.

During that webinar the new MP will be demonstrated. The developers will take a technical deep-dive and show some examples of use cases for this MP.

With this MP we can put VB scripting behind us and focus on the here, now AND future by using PowerShell workflows for SCOM monitoring!

And the price of the MP? SquaredUp is pretty clear about it: ‘…As part of our continuing commitment to the SCOM community we’re extremely excited to announce that we will be making a new PowerShell Monitoring Management Pack freely available to the community, available to download from our site and open-sourced via GitHub…’

Want to know more? Go here and signup for the webinar.

Wednesday, July 5, 2017

Cross-Post: Azure Stack Pre-GA Update

Mark Scholman posted an excellent article about the current status of Azure Stack, just before it becomes GA (Generally Available).

Since this article is an excellent write-up all about Azure Stack, I recommend anyone interested in it, in any kind of way, to read it.

Thanks Mark for sharing!

Azure Tip: How To Restore The Portal To Default

Bumped into this situation myself: I modified the ‘default’ Azure portal dashboard a little bit too much…

So I wanted to go back to the default layout. It took me some time to locate this option. When I found it I experienced a ‘duh’ moment. In order to save you the same embarrassment I decided to share this tip.

  1. When you need to set the Azure portal default dashboard back to its original settings, go to Portal Settings;
  2. Hit the button Discard modifications and confirm in the screen that is shown;
  3. Select Yes. The Azure portal will freeze for a couple of seconds;
  4. After the temporary freeze, the Azure portal will ‘restart’ like it’s the first time, including the Welcome screen;
  5. Select the option you prefer and presto, the Azure portal default dashboard is back to its original layout.

Good to know when restoring default settings:

  • Your custom made OTHER dashboards won’t be affected. So they are retained;
  • The previously chosen theme will also be retained.

Tuesday, June 27, 2017

Cross Post: Alternative Logical Disk Space Monitors

Tim McFadden authored an MP containing two new logical disk space Monitors, based solely on the PERCENTAGE of remaining disk space. These Monitors generate a warning Alert at 10% logical disk free space and a critical Alert at 5% logical disk free space.

These two Monitors are a better approach compared to the overly complex Monitors present in the Server OS MP.
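To give an idea of what such a percentage-based check boils down to, here’s a small stand-alone PowerShell sketch (NOT taken from Tim’s MP; the thresholds simply mirror the ones mentioned above):

```powershell
# Sketch: classify fixed local disks by percentage of free space.
$thresholdWarning  = 10   # warning Alert below 10% free
$thresholdCritical = 5    # critical Alert below 5% free

Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
    $percentFree = [math]::Round(($_.FreeSpace / $_.Size) * 100, 1)
    $state = if ($percentFree -le $thresholdCritical) { 'Critical' }
             elseif ($percentFree -le $thresholdWarning) { 'Warning' }
             else { 'Healthy' }
    '{0}: {1}% free -> {2}' -f $_.DeviceID, $percentFree, $state
}
```

The appeal of a percentage-based check is that one threshold pair works for a 50 GB system disk and a 4 TB data disk alike, without the MB/percent juggling the Server OS MP Monitors require.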

As such I advise anyone running SCOM to take a look at this MP and how it’s configured. Yes, when imported, some additional one-time tasks are required. After that, it’s simply Set & Forget.

Go here for more information about this MP. A BIG thanks to Tim McFadden for providing this MP.

Cross Post: Exchange Server 2013 Extension MP

Volkan Coskun has written an extension MP for Exchange Server 2013. This MP discovers individual Mailbox databases (standalone or DAG) and Transport Queues on Exchange servers.

This MP also contains a few Monitors:

  • Check Database Mount Status: checks whether the DB is mounted or not;
  • Mailbox Database LastAnyBackup Check: a modified version of the 2010 MP’s monitor. The script checks both incremental and full backups; if any backup exists within the configured period the monitor is healthy;
  • Active Preference Check: this monitor checks whether the database is mounted on Activation Preference 1; if the database fails over to any other node, the monitor turns to warning.

The same MP contains 7 performance collection Rules:

  • Database size
  • Database Whitespace
  • Number of Mailboxes in Database
  • Local Mail Flow latency (uses Test-Mailflow)
  • Login Latency (uses Test-MAPIConnectivity)
  • Last full  backup age
  • Last incremental backup age

Haven’t tested this MP myself yet, but it looks promising. Of course, as it goes with ANY new MP: TEST it first before rolling it out in production.

Go here for more information about this MP.

New MP: Microsoft Azure Stack (MAS) MP, Version 1.0.1.0

Some time ago Microsoft released an MP for monitoring the availability of Microsoft Azure Stack (MAS), version 1.0.1.0. There are some things you must know however:

  1. This MP monitors the availability of the MAS infrastructure running MAS TP3;
  2. Yes, the MAS nodes are totally locked down, so there is no SCOM Agent involved here;
  3. Instead some MAS APIs are leveraged to remotely discover and collect instrumentation information, such as Deployments, Regions and Alerts;
  4. Out of the box, this MP doesn’t do anything. After import additional actions are required;
  5. Concurrent monitoring of multiple regions has not been tested with this MP;
  6. For MAS deployments using AAD, the SCOM Management Server requires a connection with Azure. This can also be done from the system running the SCOM Console, used for configuring the MAS MP;
  7. .NET Framework 4.5 MUST be installed on all SCOM Management Servers and systems running the SCOM Console;
  8. The SSL certificate provided for the Microsoft Azure Stack deployment of Azure Resource Manager, must be installed in the Trusted Root Certificate Authority Store of all SCOM Management Servers and the computer(s) with the SCOM Console used for configuring the MAS MP;
  9. When SPN is used for authentication, the same certificate created along the SPN must be installed on all SCOM Management Servers and the computer(s) with the SCOM Console used for configuring the MAS MP;
  10. The account credentials which have Owner rights to the Default Provider Subscription of MAS (mostly the Azure Stack Service Administrator account) are required when configuring the MAS MP.

The MP and its related guide (PLEASE RTFM!!!) can be found here.

Monday, June 19, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 03 – SCOrch


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it.

Other postings in the same series:
01 – Kickoff
02 – SCCM
04 – SCDPM
05 – SCSM


In the third posting of this series I’ll write about how System Center Orchestrator (SCOrch) relates to Microsoft’s Mobile First – Cloud First strategy. And as stated at the end of the second posting of this series, SCOrch isn’t in good shape at all…

SCOrch
Sure, there is SCOrch 2016. And YES, it has its Mainstream Support End Date set to the 11th of January 2022, just like the whole SC 2016 stack. Also the Integration Packs for SCOrch 2016 are available for download. So on the outside all seems to be just fine. SCOrch is alive and kicking!

But wait! Hold your horses! Because here the earlier mentioned iceberg comes into play. Time to take a look at what’s UNDER the water line, outside the regular view…

Yikes! x86 (32-bit) ONLY…
The days when 64-bit workloads were special are long gone. All important Microsoft products and services are 64-bit based. Meaning, x86 (32-bit) isn’t the default anymore. Nonetheless, SCOrch 2016 is still x86 based and there aren’t any plans at Microsoft to rewrite the code for x64.

Therefore SCOrch’s native PowerShell (PS) execution runs in a 32-bit PowerShell 2.0 session, causing all kinds of issues. Sure, there are workarounds for it, but workarounds they remain.
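One commonly used workaround is to escape that 32-bit session by spawning the native 64-bit PowerShell through the ‘sysnative’ file system alias (which only resolves from a 32-bit process). A hedged sketch of what that looks like inside a runbook script activity:

```powershell
# Sketch: from SCOrch's 32-bit PowerShell session, launch the native
# 64-bit PowerShell via the 'sysnative' alias and do the real work there.
$psx64 = "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe"

$output = & $psx64 -NoProfile -NonInteractive -Command {
    # Anything placed here executes in the machine's native 64-bit
    # PowerShell (3.0 or later, depending on what's installed).
    [IntPtr]::Size * 8   # returns 64, proving we escaped the 32-bit host
}
$output
```

It works, but every such hop adds process start-up overhead and another place where error handling can go wrong, which is exactly the point made below.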

Even though SCOrch packs some serious power, the x86 limitation is something to reckon with.

The ‘Engine’ & the ‘Graphical Editor’
These are crucial parts of any automation tool, SCOrch included:

  1. The ‘Engine’ enables the automation tooling to ‘translate’ the defined activities as stated in the runbook (e.g. running a script, stopping a service, creating a folder, etc.). SCOrch runs its own runbook engine, using its own proprietary runbook format.


  2. The ‘Graphical Editor’ allows for a ‘drag & drop’ experience when creating new runbooks/workflows (e.g.: When the printer spooler service stops, restart it. Wait 2 minutes, then check the state of the spooler service. When started, close the related Alert. When still not running, create a ticket and escalate it to the right tiers).

    SCOrch brought this ‘drag & drop’ experience to a whole new level because it doesn’t require any scripting. Just drag & drop the required activities – from the loaded integration packs – onto your ‘canvas’, connect them as required, apply filters/criteria and so on, and be ‘done’ with it. Of course, good Runbook authoring is far more complicated; all I am trying to do here is share the basics of how it’s done. The gist of this is that even without any scripting skills, one can build advanced runbooks with SCOrch.

However, things have moved on. In today’s world the on-premise/data center based workloads are often connected to the cloud, whether we’re talking Azure IaaS/PaaS/SaaS or Office 365 for instance. Whenever automating the management of cloud based workloads, PS is a hard requirement, whether you like it or not.

The challenges
And here SCOrch has two serious issues/flaws:

  1. By default SCOrch PS execution runs in a 32-bit PowerShell 2.0 session, missing out on many advanced PS features introduced in the x64 editions;
  2. By default the SCOrch engine isn’t PS based.

As such, there will always be a translation from the native SCOrch engine to PS. On top of that, there will ALSO be a translation from x86 to x64 and vice versa…

And as it goes with every translation, there is a performance penalty. Even worse, the whole chain (SCOrch > AA/SMA > targets to hit with a runbook/workflow) becomes longer and therefore more vulnerable to (human) errors. So why not cut out the ‘middle man’ – in this particular case SCOrch – and start directly with PS? Because SMA and AA both use an identical runbook format based on Windows PowerShell Workflow, x64 based.

No more translation, neither from a proprietary runbook format, nor from x86 PS execution to x64. Nice!
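To illustrate that shared format, here’s a hypothetical PowerShell Workflow runbook mirroring the spooler example from above. It’s a sketch, not production code; the workflow name and parameter are my own invention:

```powershell
# Sketch of an AA/SMA runbook in the shared PowerShell Workflow format.
workflow Restart-SpoolerService
{
    param([string]$ComputerName = 'localhost')

    # InlineScript runs regular PowerShell against the target
    InlineScript {
        $svc = Get-Service -Name Spooler -ComputerName $using:ComputerName
        if ($svc.Status -ne 'Running') {
            Start-Service -InputObject $svc
        }
    }

    # Workflow-native step: wait two minutes, then re-check
    Start-Sleep -Seconds 120

    $status = InlineScript {
        (Get-Service -Name Spooler -ComputerName $using:ComputerName).Status
    }
    if ($status -ne 'Running') {
        Write-Error "Spooler on $ComputerName still not running - escalate."
    }
}
```

The same .ps1 text runs unchanged in Azure Automation or SMA, natively x64 – which is precisely the ‘no middle man’ argument.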

Port SCOrch to x64 and native PS?
For sure, Microsoft could solve it all by rewriting SCOrch in such a way that it would run natively on x64 and use the identical runbook format based on Windows PowerShell Workflow. However, Microsoft isn’t going to do that.

Already in 2014(!) Microsoft was pretty clear about the ‘future’ of SCOrch. In 2015 Microsoft published the SCOrch Migration Toolkit (still in beta?!). Around the same time Microsoft also released the SCOrch Integration Modules, being converted SCOrch Integration Packs, ready for import into AA. In 2016 Microsoft published a blog posting about how to use the previously mentioned tools and modules.

And that’s about all the effort Microsoft has aimed at SCOrch specifically… Instead Microsoft tries to push you to AA or (in some cases, when using WAP) SMA. For most people however, AA is the future (at least Microsoft hopes).

Verdict for SCOrch and its future
Yes, SCOrch 2016 is available. And it still packs a lot of power. BUT at the end of the day, SCOrch 2016 is dead in the water. Not many efforts, budgets or resources are allocated to it. Only the bare minimum. Sure, it has gotten the 2016 label AND the related Integration Packs (IPs) are updated to support the 2016 Windows Server workloads. But that’s it.

Nothing new is coming out of that door. End of the line for SCOrch 2016 after the 11th of January 2022. Even the recent posting by Microsoft about the new delivery model for the System Center stack is pretty clear about SCOrch: not a single word about it. Which is a statement in itself.

What to do?
When not using SCOrch, but using other System Center 2016 components of the stack: think twice. Sure, you already got the licenses for it. But please keep in mind that every effort and investment in SCOrch must be doubled: one time to get it into SCOrch and a second time to get it out to other automation tooling, no matter which one you choose.

When already using SCOrch, it’s time to look for alternatives. Also look OUTSIDE the Microsoft boundaries please. POC the alternatives and look at the possibilities to export the SCOrch based runbooks to your alternative of choice. Also test the connectivity with the cloud and on-premise/datacenter based workloads. And TEST and EXPERIENCE how the graphical editors function, how easy they are to operate and, last but not least, how easy it is to catch errors and act upon them. AA still has some challenges to address, like ease of operation and capturing errors…

Coming up next
In the fourth posting of this series I’ll write about SCDPM (System Center Data Protection Manager). See you all next time.

Friday, June 16, 2017

!!!Hot News!!! Frequent, Continuous Releases Coming For System Center!!!

Wow! For some time Microsoft told their clients that one day the SCCM release cycle, also known as Current Branch (CB), would come (in one form or another) to the rest of the System Center stack.

And FINALLY Microsoft has released more information about how the System Center stack is going to adapt to a faster release cadence.

In a nutshell, this is going to happen:

  1. Microsoft will be delivering features and enhancements on a faster cadence in the next year;
  2. Main focus here will be on the highest priority needs of Microsoft’s customers across System Center components;
  3. There will be releases TWICE per year, in alignment with the Windows Server semi-annual channel;
  4. A technical preview release is planned in the fall with the first production version available early next calendar year;
  5. There will be subsequent releases approximately every six months;
  6. These releases will be available to System Center customers with active Software Assurance;
  7. SCCM/ConfigMgr will continue to offer three releases per year.

In the first release wave the main focus will be on three SC components:

  1. SCOM(!);
  2. SCDPM;
  3. SCVMM.

Key areas of investment will be:

  1. Support for Windows Server & Linux;
  2. Enhanced performance, usability & reliability;
  3. Extensibility with Azure-based security & management services.

What’s in the pipeline for SCOM specifically?

  1. Expanded HTML5 dashboards (FINALLY!!!);
  2. Enhancements in performance & usability;
  3. More integrations with Azure services (eg. integration with Azure Insight & Analytics Service Map);
  4. Improved monitoring for Linux using a FluentD agent.

On top of it all, YOU can influence the upcoming releases! Therefore Microsoft encourages you to join the System Center Tech Community and UserVoice forums to provide your feedback and suggestions.

Go here to read the posting I got all this information from. A BIG thanks to Peter Daalmans who pointed this posting out to me.

Recap
For me this is THE sign that Microsoft has FINALLY decided on the future of the System Center stack, by delivering insight into how they're going to execute on their previously made promises to move the SC release cycle closer to the Current Branch (CB) model.

As such I expect the end of notations like SC 2016. It makes sense to introduce a new naming scheme, like YYMM. Example: System Center 1806 refers to the SC release of June 2018. As a result I expect there will be a new support model as well, just like the one in place for SCCM/ConfigMgr CB.

For now Microsoft is silent about it, but to me it looks like the next logical step. It makes no sense to combine the new release cadence with a Mainstream Support End Date like the current SC 2016 has. Even for a company like Microsoft, it would cost far too much money and resources, better used elsewhere (read: Azure).

Nonetheless, this development is a huge step forward and makes the future of the SC stack much brighter. For sure, it doesn't have an eternal life expectancy. It never had. But at least there is something of a roadmap. And yes, one day the SC stack will be fully incorporated into Azure, which makes sense as well. But at least for now, Microsoft has recognized the significance of the SC stack.

Wednesday, June 14, 2017

SCOM 2016 Must Haves

Good to know:
This posting is based on the power of the community, since it recommends MPs, Best Practices and so on, all publicly available for free, shared under the motto: 'Sharing is caring'. So all credits should go to the people who made this possible. This posting is nothing but a referral to all the content mentioned in it.

Why this posting?
’SCOM 2016 is just a little bit more complex than Notepad’, I often tell my customers. I'm just trying to get the message across that even though SCOM packs quite awesome monitoring power, it still needs attention and knowledge in order to get the most out of it.

Even with the cloud in general and OMS more specifically, SCOM still deserves its own place and delivers ROI for years to come. And NO, OMS isn't SCOM! Enough about that, time to move on…

Nonetheless, everything that makes SCOM 2016 more robust and/or easier to maintain is a welcome effort. And not just that, it should also be used to the fullest extent.

Therefore this posting, in which I try to point out the best MPs, fixes, workarounds, tweaks & tricks, all aimed at making your life as a SCOM admin easier. Since content comes and goes, this posting will be updated when required.

I’ve grouped the topics into various areas, trying to make them more accessible for you. There is much to share, so let’s start.


01 – SCOM Web Console REPLACEMENT
Ouch! If there is a SCOM component I really dislike, it’s the SCOM WEB Console. Why? It’s too slow, STILL has Silverlight dependencies (yikes!) and misses out on a lot of functionality. As such it’s quite dysfunctional and quite likely to become a BoS (Blob of Software) instead of an often used SCOM component… Therefore, most of the time I simply don’t install it.

Still, a FUNCTIONAL SCOM Web Console would be great. And when done right, it could be used as a replacement for the SCOM GUI (SCOM Console). But what to use? And when there’s an alternative, at what price?

Stop searching! The SCOM Web Console (and even SCOM GUI) alternative is already there! And yes, it’s a commercial solution. But wait! It has a FREE version, titled Community Edition! It’s HTML5 driven and taps into BOTH SCOM SQL databases, enabling the user to consume both data sets in ONE screen. So you can look at current operational data and cross-reference it with data contained in the Data Warehouse!

And not just that, but it’s FAST as well! And I mean REALLY fast!

For many users this product has become a full replacement for BOTH SCOM Consoles. As a result the SCOM GUI is only used for SCOM maintenance by the SCOM admins. The consumption of SCOM data, state information and alerts however is mostly done by using the HTML5 Console.

Yes, I am talking about SquaredUp here. Go here to check it out. Click on pricing to see the available versions, ranging from FREE(!) to Enterprise Application Monitoring.

Oh, and while you’re at it, check out their new Visual Application Discovery & Analysis (VADA) proposition, enabling end users(!) to automatically map the application topologies they’re responsible for, all in the matter of minutes!

Advice: Download the CE version and be amazed at how FAST and good a SCOM Console can be!


02 – Automating SCOM maintenance & checks
I know. The name implies SCOM 2012. But guess what? SCOM 2016 is based on SCOM 2012 R2. As such, the MP I am about to recommend works just fine in SCOM 2016 environments as well.

Whenever you’re running SCOM 2016 I strongly advise you to import AND tune the OpsMgr 2012 Self Maintenance MP. It helps you automate many things AND is capable of detecting SCOM MS servers being put into Maintenance Mode (MM). When that happens (and the MP is properly configured!), this MP will remove those SCOM MS servers from MM! It’s also capable of exporting ALL MPs on a regular basis and keeping an archive of these exports for as many days as you prefer.

Please know that ONLY importing this MP won’t do. It requires some tuning, otherwise nothing will happen. Fortunately Tao Yang (the author of this MP) provides a well written guide, explaining EVERYTHING! So RTFM is key here.

Advice: This MP is a MUST for any SCOM 2016 environment. Import and TUNE it.
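As an aside, to give you a feel for what the MP automates: the manual equivalent of its MP-export feature is just a couple of lines with the OperationsManager PowerShell module. This is only an illustrative sketch (the Management Server name and export path are placeholders you need to adapt), the MP itself does this on a schedule and prunes old exports for you:

```powershell
# Manual equivalent of the Self Maintenance MP's scheduled MP export.
# 'scom-ms01' and the export path are placeholders for your own environment.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

# Create a dated folder so every export run keeps its own archive
$exportFolder = "D:\MPBackup\$(Get-Date -Format 'yyyy-MM-dd')"
New-Item -ItemType Directory -Path $exportFolder -Force | Out-Null

# Export ALL Management Packs to that folder
Get-SCOMManagementPack | Export-SCOMManagementPack -Path $exportFolder
```

Of course, the whole point of Tao Yang's MP is that you never have to remember to run this yourself.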


03 – Prevent SCOM Health Service Restarts (on monitored Windows servers)
The name I am about to mention belongs to a person who has made SCOM a far better product than it ever was. Without his efforts, time and investments, SCOM would be far more of a challenge to master.

Yeah, I am talking about Kevin Holman. For anyone working with SCOM he needs no introduction. One of his postings is all about unnecessary restarts of the SCOM Health Service, the very heart of every SCOM Agent installed on any monitored Windows based system.

The same posting refers to a MP on the TechNet Gallery, addressing the causes of this nagging issue. Please RTFM his posting FIRST before importing the MP. That way you’ll differentiate yourself from the monkey in the zoo pushing a button in order to get a banana, without ever understanding the mechanisms behind it…

Advice: Import this MP in EVERY SCOM 2016 environment you own.


04 – Registry tweaks for SCOM MS servers
And yes, he also wrote a posting about recommended registry tweaks for SCOM 2016 Management Servers. And YES, he also provided the commands in order to roll out those tweaks.

Again: RTFM first before applying them. Alternative: press the button and be amazed when a banana appears out of thin air.

Advice: Make sure to run these registry tweaks on ALL your SCOM 2016 Management Servers.
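To illustrate the pattern (NOT to replace reading his posting!): the tweaks boil down to setting DWORD values under the Health Service and DAL registry keys on each Management Server. The key names and values below are examples from memory, so verify every name and value against Kevin's posting before you apply anything:

```powershell
# Illustrative sketch only - take the authoritative key names and values
# from Kevin Holman's SCOM 2016 registry tweaks posting before applying!

# Example: let the SDK/Data Access service recycle broken SQL connection pools
$dal = 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Common\DAL'
New-ItemProperty -Path $dal -Name 'DALInitiateClearPool' -PropertyType DWord -Value 1 -Force
New-ItemProperty -Path $dal -Name 'DALInitiateClearPoolSeconds' -PropertyType DWord -Value 60 -Force

# Example: enlarge the Health Service state queue on a Management Server
$hs = 'HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters'
New-ItemProperty -Path $hs -Name 'State Queue Items' -PropertyType DWord -Value 20480 -Force

# The Health Service must be restarted for the tweaks to take effect:
# Restart-Service HealthService
```

Run this elevated on every Management Server, and again: RTFM first.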


05 – SQL RunAs Addendum MP
Like I already stated, we – the SCOM users – owe one man in particular a lot of thanks, even when he doesn’t want to hear about it. So it’s the same person we’re talking about here as well.

Until now I haven’t seen any SCOM environment NOT monitoring SQL instances. The SQL MP delivers tons of good information and actionable Alerts on top of it. As such, the SQL MP is imported and configured. The latter WAS quite a challenge, all about making sure SCOM has enough permissions to monitor the SQL instances.

Luckily this difficulty is addressed by the SQL RunAs Addendum MP. Again, RTFM! But once read, import the MP and be amazed! Sure, this MP came to be through the effort of many people, so a BIG word of thanks to all the people involved.

Advice: IMPORT this MP and USE it! It makes your life much easier and saves you lots of time, to be used elsewhere.


06 – Agent Management Pack (MP)
Sure. When SCOM monitors something, a Management Pack is required. Without it, NO monitoring. Period. But still, the SCOM Agent running on the monitored Windows server is crucial as well. So all available information about those very same SCOM Agents is welcome, combined with some smart tasks in order to triage or remedy common issues.

Therefore it’s too bad that SCOM, out of the box, lacks many of those things. Sure, the basics are covered, but that leaves a lot of ground uncovered.

Fortunately, a community based MP solves this issue. Again, RTFM first before importing this MP, to be found here.

Advice: RTFM, import this MP and soon you’ll find yourself wondering how you ever got along WITHOUT it.


07 – Enable proxy on SCOM Agents as default
Whenever SCOM wants to monitor workloads living outside the boundaries of a server (like SQL, AD and so on) it has to look ‘outside’ that same Windows server. By default the SCOM Agent isn’t allowed to do that, for security reasons.

Sure, people can hack into anything. But to think that a hacker would impersonate a SCOM Health Service workload is something else altogether. Why? Well, the moment a hacker is already that deep into your network, chances are far more likely he/she will have found something far more lucrative AND easier to grasp.

Nonetheless, the SCOM Agent proxy is disabled by default. Sure, you can enable the Agent Proxy with a scheduled script. But when you’re already applying that workaround (that’s what it is…), why not change the source instead and be done with it?

Go here and follow the advice and apply the scripts. From that moment on the SCOM Agent proxy is ENABLED by DEFAULT. Problem solved. Next!

Advice: Enable the SCOM Agent proxy by default and forget about it.
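For the already deployed agents, a one-off bulk enable can be done from any machine with the SCOM console installed. A minimal sketch ('scom-ms01' is a placeholder for one of your Management Servers); note this only fixes EXISTING agents, for making proxy the DEFAULT for future agents follow the scripts in the posting linked above:

```powershell
# Connect to the management group ('scom-ms01' is a placeholder)
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

# Enable the agent proxy on every agent that doesn't have it enabled yet
Get-SCOMAgent | Where-Object { -not $_.ProxyingEnabled.Value } | Enable-SCOMAgentProxy
```

Combine this one-off run with the default-setting change from the posting and the problem stays solved for new agents too.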


08 – SCOM 2016 System Center Core Monitoring Fix
The System Center Core MP from SCOM 2016 (up to UR#3!) contains some issues, as stated by Lerun on TechNet Gallery: ‘…temporary fix for rules and monitors in the System Center Core Monitoring MP shipped with SCOM 2016 (UR3). Issues arise when using WinRM to extract WMI information for some configurations. The issue is reported to Microsoft, though until they make a fix this is the only workaround except from disabling them…’

RTFM his description and import the MP from TechNet Gallery.

Advice: Import this MP and forget about this issue.


09 – SCOM Health Check Report V3
Okay. This MP was written when SCOM 2016 was only a dream. But this MP still works with SCOM 2016. Again, RTFM is required here. But again, the guide tells you all there is to know and to DO before importing this MP.

This MP gives you great insight into the health of your SCOM environment and is made by people I highly respect (Pete Zerger and Oskar Landman). Download the MP AND the guide from TechNet Gallery, RTFM the guide, do as stated in the guide, import the MP and be amazed about the tons of worthwhile insights you get.

Advice: Is the MP already in place? If not, please do so now.


As you can see, for now there are 9(!) tweaks, recommendations, MPs and so on, all enabling you to have a better life with SCOM 2016. Feel free to share your experiences, best practices, tweaks and so on.

Once double checked, I’ll update this posting accordingly, with your name attached as well of course!