Thursday, March 28, 2013

SCOM 2012 Sizing Helper: Updated Or Not?

Got this question from a couple of readers of my blog. They found this webpage on the Microsoft Download Center.

Besides the MP for monitoring JEE (Java Enterprise Edition), it also offers the SCOM 2012 Sizing Helper. This is a good tool and I use it every time I have to design a new OM12 environment or run a health check on an existing OM12 environment.

They asked me whether it’s an updated version or the ‘old’ version, 1.0. So I downloaded it, checked some items and IMHO, it’s still version 1.0:
[screenshot]

And:
[screenshot]

In itself nothing wrong here, since it’s simply a good tool. And a long time ago – before I worked in IT – I learned this: better is the enemy of good. In other words: when something is good and functional, why replace it?

Updated MP: Monitoring JEE (Java Enterprise Edition) Application Servers

Some days ago Microsoft released the updated MP for monitoring JEE (Java Enterprise Edition) application servers, based on IBM WebSphere, Oracle WebLogic, Red Hat JBoss and Apache Tomcat.

Features of this MP:

  • Enables automatic discovery of Tomcat/JBoss/WebSphere/WebLogic application servers deployed in the customer's environment.
  • Monitors application server availability and performance.
  • Discovers and monitors applications deployed in your Java EE application servers.

The MP can be downloaded from here.

SCOM Certificate Enrollment Issue

2013-03-29 Update
Pete Zerger left a good comment on this posting: it’s not required to lower the security for the entire zone in IE. Back in 2009(!) Pete already blogged about this issue and how to solve it. His posting can be found
here. Thanks Pete for sharing. Comments like this make this blog even better. Awesome!

When the SCOM Certificate template is created as described in this posting of mine and you want to submit a certificate request on the CA website (http://localhost/certsrv), you might get two messages which block the request:

  1. In order to complete certificate enrollment, the Web site for the CA must be configured to use HTTPS authentication:
    [screenshot]
  2. Internet Explorer blocked an ActiveX control, so this page might not display correctly:
    [screenshot]

So how do you solve this without using HTTPS authentication? And why would you want to? Simply because sometimes companies don’t want the SSL hassle for a PKI which is only used on a small scale inside their IT environment, and not for production with CRLs and the lot. In the latter case it’s highly recommended to use SSL in order to keep your PKI secure and locked down.

Workaround
In order to submit a certificate request successfully without enabling SSL, simply follow these four steps (a scripted sketch follows after the list):

  1. Start IE with elevated permissions and surf to http://localhost/certsrv;
  2. Go to Internet Options > Security > Local Intranet > Sites > Advanced > Add this website to the zone http://localhost/certsrv > Add > Close > OK;
    [screenshot]
  3. Set the Security level for this zone to Low:
    [screenshot]
  4. > Apply > OK. Restart IE, again with ELEVATED permissions.
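
By the way, if you prefer to script Steps 2 and 3 (handy when you need this on more than one machine), the same settings live in the registry. This is a minimal hedged sketch, assuming the default per-user zone layout where zone 1 is Local intranet and 0x10000 means Low; verify on your own systems first:

# Map http://localhost to the Local intranet zone (zone 1) for the current user
$domains = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\localhost'
New-Item -Path $domains -Force | Out-Null
Set-ItemProperty -Path $domains -Name 'http' -Value 1 -Type DWord

# Lower the Local intranet zone security level to Low (0x10000)
$zone = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1'
Set-ItemProperty -Path $zone -Name 'CurrentLevel' -Value 0x10000 -Type DWord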

Now you can submit your certificate requests successfully after you answer these two dialogs positively:
[screenshot]
> Yes.

And:
[screenshot]
> Yes.

Wednesday, March 27, 2013

New Book: Microsoft System Center Virtual Machine Manager 2012 Cookbook

Fellow MVP Alessandro Cardoso recently published his book, Microsoft System Center Virtual Machine Manager 2012 Cookbook.
[screenshot]

Taken from his blog:

What you will learn

  • VMM Architecture and planning for a real world deployment
  • Network Virtualization, Gateway integration, Storage integration, Resource Throttling, Availability options
  • Operations Manager (SCOM) deployment and integration with VMM
  • SC APP Controller (SCAC) deployment and integration with VMM to manage Private and Public Cloud (Azure)
  • Cluster deployment with VMM Bare Metal
  • Creating and deploying Virtual Machines from Templates
  • Deploying a highly available VMM Management Server
  • Managing Hyper-V, VMware and Citrix from VMM
  • Upgrade from SCVMM 2008R2 to SCVMM 2012 SP1
  • Monitoring VMware with Veeam Management Pack

Currently, no other book offers thorough coverage of System Center VMM 2012 SP1 and integration with App Controller and SCOM.

Anyone involved with VMM should buy this book, which can be purchased at Packt Publishing, Amazon and many other selling points.

Monday, March 25, 2013

Active Directory MP Issue: Topologies Stay Unmonitored

Bumped into this situation recently. The customer had two OM12 environments in place: one for testing purposes and another for production.

Situation
In both environments the AD MP (version 6.8070.0) was imported, configured and tuned. In the test environment the MP worked fine and the Topology Views were populated and had a health status to the top level entity.
[screenshot]

However, in Production the Topology Views were also displayed but showed an unmonitored health status. This puzzled me, since there were no errors to be found in the OM12 Console related to the AD MP. I also checked all DCs and their OpsMgr event logs. No errors or warnings to be found there either. The OpsMgr Agent on all DCs was in top-notch condition and all MPs and their scripts executed properly.

On all DCs – besides the OM12 Agent – the AD MP Helper Object (OOMADs.msi) was present and functional, so no issues there either.

The ONLY difference
Since the production environment has a trust in place with another forest, and that forest is already covered by other monitoring mechanisms, I had enabled the option ‘Agent-Only Discovery’ as described in the AD MP guide, pages 30 and 31.

This way only the DCs of the forest where the OM12 MS servers reside will be discovered and monitored by the AD MP. So no noise. I have used this approach in many other similar environments and have had no issues at all.

Resolution
As a test I removed the override for Agent-Only Discovery, bounced the Health Services on all DCs and on the OM12 MS servers as well (the OM12 MS servers are responsible for health state calculations) and presto: within 5 minutes all Topology Views got a health status which rolled up to the top level entity!
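
For those who don’t like clicking: bouncing the Health Service on a batch of servers can be scripted as well. A minimal sketch, assuming PowerShell remoting is enabled and with placeholder server names:

# Hypothetical list; replace with your DCs and OM12 MS servers
$servers = 'dc01', 'dc02', 'om12ms01', 'om12ms02'
Invoke-Command -ComputerName $servers -ScriptBlock {
    # HealthService is the service name of the System Center Management service
    Restart-Service -Name HealthService
}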

Quick Guide: Installing Ubuntu For X-Plat Demo Purposes

Last week I added an Ubuntu based server in my OM12 test environment. I needed a demo where a Linux server is monitored by OM12. This posting is a high level overview of all the steps involved.

There is much to tell, so let’s start.

Part 1: Obtain, install and configure Ubuntu
This part cost me a few evening hours to get right. I installed Ubuntu Server about 5 times. The installer is really user friendly, but I wanted to gain some non-Microsoft OS installation experience. Also, I didn’t like the absence of a GUI, so I experimented with some of the different flavors available. Finally I opted for the GNOME full desktop environment.

  1. Download the correct version and architecture of Ubuntu Server from here. I downloaded Ubuntu Server 12.04.2 LTS since that version is fully supported with OM12 SP1. For the architecture I opted for 64-bit.
  2. In Hyper-V I created a new VM with 1 CPU and 1024 MB of RAM, added the default NIC I use for all VMs and connected the DVD drive to the ISO image I downloaded in Step 1 (a scripted sketch follows after this list);
  3. After the VM was created successfully I started it and ran the installation, which finished within 10 minutes! It’s important for that VM to have a working internet connection, since some components are downloaded during installation;
  4. During the installation follow the wizard and answer all questions. Also create an account with a password;
  5. When the installation is finished, the server reboots and within a minute the server is up and running. Enter the credentials you created in Step 4 and you’re in!
  6. Since I am a Windows guy I missed the GUI already. After some testing I decided to use the GNOME full desktop. This Ubuntu Wiki tells you how to install different GUIs on Ubuntu Server. Of course there are many reasons NOT to install a GUI, but I wanted one nonetheless. Finally I gave this command: sudo apt-get install ubuntu-desktop. This will take a while since all related components are downloaded from the internet and installed afterwards. When the download and installation are finished, simply reboot the server. When it comes back online the GUI is started automatically and you can log on with the same set of credentials created in Step 4.
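
As an aside: on a Windows Server 2012 Hyper-V host (which ships with the Hyper-V PowerShell module; Windows Server 2008 R2 doesn’t) Step 2 can be scripted as well. A minimal sketch with hypothetical names and paths:

# Create the VM with 1024 MB of RAM, attach the switch and the Ubuntu ISO
New-VM -Name 'ubuntu' -MemoryStartupBytes 1GB -SwitchName 'LAN'
Set-VMDvdDrive -VMName 'ubuntu' -Path 'D:\ISO\ubuntu-12.04.2-server-amd64.iso'
Start-VM -Name 'ubuntu'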

So now we have an up-and-running Ubuntu server. Our final goal is to monitor it with OM12 SP1 UR#1, so next we have to configure OM12 accordingly.

Part 2: Prepping OM12
Even though OM12 is capable of monitoring UNIX/Linux systems, there is some preparation required. Otherwise it simply won’t work.

  1. Bring OM12 to the most current level, which at this moment is SP1 with UR#1;
  2. Also download the latest UNIX/Linux MPs available for OM12 SP1 UR#1, to be found here. Run the installer on an OM12 MS server;
  3. Now it’s time to import the required MPs. They reside in several locations:
    1. Installation media OM12 SP1
      Go to the folder ~\ManagementPacks and import these four MPs:
      - Microsoft.Linux.Universal.Library.mp;
      - Microsoft.Linux.Universal.Monitoring.mp;
      - Microsoft.Linux.UniversalR.1.mpb (Oracle and CentOS Linux distributions);
      - Microsoft.Linux.UniversalD.1.mpb (Debian and Ubuntu).
      [screenshot]
    2. Updated Linux MPs as stated in Step 2
      Go to the folder C:\Program Files (x86)\System Center Management Packs\System Center 2012 MPs for UNIX and Linux and select this MP:
      - Microsoft.Linux.Library.mp.

      Go to the folder C:\Program Files (x86)\System Center Management Packs\System Center 2012 MPs for UNIX and Linux\Microsoft.Unix.Library\2012 SP1 and select this MP:
      - Microsoft.Unix.Library.mp.
  4. The MPs used for monitoring UNIX/Linux systems also contain the Agents which are installed on those UNIX/Linux servers. So when the MPs are imported, the related Agents are ‘extracted’ and published in the folder ~:\Program Files\System Center 2012\Operations Manager\Server\AgentManagement\UnixAgents\DownloadedKits on all your Management Servers.

    In order for this to work, the Agent Action Account requires admin permissions on the OM12 Management Servers, so make sure this account has sufficient permissions. The extraction can also take some time, during which the HealthService.exe process might consume more CPU time than usual. Keep this in mind before importing the UNIX/Linux MPs.
  5. Kevin has written an excellent posting all about monitoring UNIX/Linux with OM12. Go here and:
    1. Create a Resource Pool for UNIX/Linux monitoring;
    2. Configure the Xplat certificates for all OM12 MS servers;
    3. Create and configure the Run As accounts for UNIX/Linux;
    4. but don’t run the step ‘Discover and deploy Agents’ yet, since the Ubuntu server requires a bit more magic :).
  6. Make sure the name of the Ubuntu box is properly resolved to its IP address. Only use DNS for it when the FQDN in DNS matches the FQDN of the Ubuntu box. In my case this didn’t work: the host record resolved the FQDN ubuntu.sc.local, while the Ubuntu box only has the name ubuntu. This will frustrate the automatic creation of the certificate, since the FQDN doesn’t match the name of the UNIX/Linux server. In my case I ended up removing the host record from DNS, adding the entry to the hosts file of both OM12 MS servers and flushing the DNS cache on both of them. Afterwards the deployment of the Agent to the Ubuntu box went fine. (A scripted sketch of the MP import, Resource Pool and hosts file steps follows after this list.)
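
For reference, a minimal scripted sketch of Steps 3, 5.1 and 6 above, run from an elevated PowerShell prompt on an OM12 Management Server. The paths, pool name, IP address and host name are assumptions; adjust them to your environment:

Import-Module OperationsManager

# Step 3: import the UNIX/Linux MPs (example paths)
$mps = 'D:\MPs\Microsoft.Linux.Universal.Library.mp',
       'D:\MPs\Microsoft.Linux.Universal.Monitoring.mp',
       'D:\MPs\Microsoft.Linux.UniversalD.1.mpb'
Import-SCOMManagementPack -Fullname $mps

# Step 5.1: create a dedicated Resource Pool for UNIX/Linux monitoring
New-SCOMResourcePool -DisplayName 'UNIX/Linux Monitoring Pool' -Member (Get-SCOMManagementServer)

# Step 6: hosts file entry plus a DNS cache flush (hypothetical IP address)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.1.50`tubuntu"
ipconfig /flushdns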

So now OM12 is prepped and ready. Next we need to spend a bit more time on the Ubuntu server in order to get it totally right, so it can be monitored by OM12.

Part 3: Prepping Ubuntu
Ubuntu requires some additional attention. SSH (Secure Shell) must be installed; initialization happens automatically. I had some issues installing the SCOM 2012 Agent on the Ubuntu server because superuser privileges were required. Therefore I activated the root account (it’s disabled by default).

I know enabling the root account isn’t best practice but hey, I needed a demo! So this isn’t production at all and I don’t know much about Ubuntu. Also the evenings are short on time already, so I took this shortcut. In production environments there are system engineers with deep UNIX/Linux experience available, so they know how to go about it. Don’t hesitate to get them involved, since their knowledge and experience will be required.

  1. Install SSH
    This blog posting tells it all.
  2. Enabling the root account
    This webpage shows how to do that. Search for ‘Enabling the root account’. In my demo I gave it the same password as I use for my own account on the Ubuntu box.

Part 4: Discovering and deploying the Agent to the Ubuntu box
Now everything is in place for the last phase, deploying the Agent on the Ubuntu box.

The earlier mentioned posting written by Kevin Holman contains good information about it; follow the steps described in the procedure ‘Discover and deploy the agents’.
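
For the PowerShell-inclined, the discovery and deployment can also be scripted with the OM12 UNIX/Linux cmdlets. A hedged sketch, assuming the Resource Pool and accounts from Part 2; verify the exact parameter names with Get-Help before use:

Import-Module OperationsManager

$pool = Get-SCOMResourcePool -DisplayName 'UNIX/Linux Monitoring Pool'   # hypothetical name
$wsmanCred = Get-Credential   # the low-privileged monitoring account
$sshCred   = Get-Credential   # the privileged account (root in this demo)

# Discover the Ubuntu box and pipe the result straight into the agent installation
Invoke-SCXDiscovery -Name 'ubuntu' -ResourcePool $pool `
    -WSManCredential $wsmanCred -SSHCredential $sshCred | Install-SCXAgent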

Within a couple of minutes my Ubuntu box was discovered, the OM12 Agent was installed on it and the box was monitored by OM12:
[screenshots]

Helpful resources
Back in the days of SCOM R2 there weren’t many resources to be found about deploying Agents to UNIX/Linux servers. Much has changed for the better nowadays. These are the resources I used:

  1. A Wiki, written by Microsoft all about troubleshooting UNIX/Linux Agent Discovery in OM12
    http://social.technet.microsoft.com/wiki/contents/articles/4966.troubleshooting-unixlinux-agent-discovery-in-system-center-2012-operations-manager.aspx;
  2. How to Configure sudo Elevation and SSH Keys
    http://technet.microsoft.com/en-us/library/hh230690.aspx;
  3. How to Configure Run As Accounts and Profiles for UNIX and Linux Access
    http://technet.microsoft.com/en-us/library/hh212926.aspx
  4. The posting written by Kevin Holman
    http://blogs.technet.com/b/kevinholman/archive/2012/03/18/deploying-unix-linux-agents-using-opsmgr-2012.aspx
  5. A posting written by Robbie Roberts
    http://robbieroberts.wordpress.com/2012/12/31/scom-2012create-linux-privileged-user-account/

Recap
Microsoft did a great job by deciding to monitor non-Microsoft platforms in SCOM R2, and has made it even better in OM12 and OM12 SP1 as well.

Also, their decision to keep the SCOM R2/OM12 Agent on UNIX/Linux servers lightweight (unlike with the Agents used for Windows servers, the MPs for UNIX/Linux monitoring execute on the OM12 Management Servers which are part of the UNIX/Linux Resource Pool) has greatly helped the overall adoption of SCOM R2/OM12 for monitoring UNIX/Linux systems.

IMHO this is a great feature of OM12 and I am happy to see that with every iteration of OM12 this functionality is extended even more. Well done Microsoft!

Thursday, March 21, 2013

OM12 SP1 Installation Bug Workaround

In the installer of OM12 SP1 there is an annoying bug to be found: the Data Warehouse database is placed in the wrong folder and gets a wrong name as well. When you’re out of luck it might even kill the installation! I already blogged about how to solve it.

But when running an OM12 SP1 installation there is already enough to be done. So wouldn’t it be nice to get the installer to do its job right from the beginning, without needing to patch it?

Gladly there is a workaround, fooling the installer so it does its job right! Take a look at this blog. It works like a charm and saves you a lot of hassle.

Credits
All credits go to Matt Boudro. Thanks for sharing Matt!

OM12 SP1 Q & A: Meet The Alert Attachment MP

OM12 SP1 is capable of sending out Alerts with additional information attached to them. Since I got many questions about it, I have written this posting hoping to clarify the ramifications of this new capability.

Q & A
There is much to share, so let’s start.

  1. Is this functionality present by default in SCOM 2012 SP1?
    No, you’ll need to import a MP for it, the Alert Attachment MP (Microsoft.SystemCenter.AlertAttachment) found on the installation media of SCOM 2012 SP1 (~:\ManagementPacks). Good to know is that UR#1 for SCOM 2012 SP1 contains an updated version of this MP.

  2. When this MP is imported I am all set?
    No, you aren’t. First of all you’ll need to configure the MP as described in this TechNet article. Also you’ll need to create a dedicated file share. The same TechNet article tells it all.

    (The same procedure is also to be found in the document mentioned in Q&A item 5, on page 517 and further.)

  3. OK, I have configured the MP and set up the file share. Now ALL Alerts will have additional information attached to them?
    No. The new capability doesn’t mean every Alert will have additional information attached to it. As stated here: ‘…Some management packs for Operations Manager attach additional information to alerts…’

  4. Can you tell me what MPs these might be?
    Unfortunately Microsoft isn’t very clear about it. But my educated guess is that it’s targeted at APM (Application Performance Monitoring) and GSM (Global Service Monitor) web tests, since the same TechNet article, under Security Notes states:

    ‘…This will enable integration with Team Foundation Server (TFS), IntelliTrace Historical Profiling, sharing Application Performance Monitoring events with developers, Global Service Monitor web tests, and any other scenarios that require files to be associated with Operations Manager alerts…’

  5. So I need to configure Team Foundation Server (TFS) as well? And how?
    Yes, you’ll need to configure TFS and a TFS related item in SCOM 2012 SP1 as well.

    The document SC2012_OpsMgr_Operations describes it in more detail, found under the header How to Configure Whether Files Are Attached or Linked in Synchronization TFS in System Center 2012 SP1 (in the PDF file from page 501 and further).

    Document SC2012_OpsMgr_Operations (in DOCX and PDF formats) to be found here.

  6. So basically I can’t select what Alert will have additional information attached to it?
    That’s correct. This new capability is aimed at APM in particular where developers need more information for good troubleshooting, like an IntelliTrace Historical Profiling snapshot which might be several hundred megabytes in size.

  7. I don’t use APM nor GSM. So chances are the Alert Attachment MP isn’t meant for me?
    At this moment that’s correct. However, it’s hard to say what the future will bring. Nonetheless, many times the Alert itself contains sufficient information to solve the root cause. So why introduce more noise to it?

Hopefully this posting answered most questions related to the Alert Attachment MP. Whenever you have comments or more questions about this MP, please let me know and I’ll update this posting accordingly.

Wednesday, March 20, 2013

OM12 SP1 Update Rollup #1 Manual Installation: RTFM Is Key But Not Enough…

With all the System Center 2012 components one has the option to have them automatically updated through the Microsoft Update mechanism. It may sound great (Set & Forget) but there is more to it than meets the eye. So be careful here.

Why?

Good question! First of all when looking at OM12 SP1 and the first Update Rollup (UR#1) there are some things which must be done manually. Otherwise UR#1 won’t land completely. For instance, a new Management Pack Bundle (MPB) has to be imported. Also – after the OM12 Web Console is updated – a related web.config file requires some attention.

Both of these actions WON’T be performed by Windows Update. Also I want to know for SURE the related UR landed properly. Simply because assumptions are the mother of all…

How to manually apply UR#1 for OM12?
Gladly there is enough information out there. Microsoft has put much effort into it, like KB2785682, describing UR#1 for SC 2012 in more detail and also containing very important information about UR#1 for OM12. So RTFM it is.

However, some of the information isn’t spot on or might put you on the wrong track. This posting is meant to fill those gaps and therefore NOT MEANT AS A REPLACEMENT of the earlier mentioned KB article. Use this posting together with the KB article and you’ll be fine.

First things first
Let’s start at the beginning and work from there.

  1. RTFM KB2785682.
    This KB article contains tons of useful information, so RTFM is key. When required read it twice so you’ll know what to expect. The paragraphs Issues that are fixed in Update Rollup 1 and Installation instructions for Operations Manager need special attention here.

  2. Download the UR#1 files for OM12
    Microsoft Update Catalog offers those files:
    [screenshot]
    Download them and EXTRACT them since these are cabinet (CAB) files, containing the related MSP files.

  3. RTFM again
    Pay close attention to the Known issues for this update and Installation notes sections of the paragraph Installation instructions for Operations Manager. This way you’ll know what to expect and how this UR will ‘behave’.

  4. What OM12 Components are touched?
    Even though the related KB article might make you think there is an update in UR#1 for the OM12 Reporting component as well, this isn’t true. It only tells you what the upgrade order is. The OM12 components which are touched by UR#1 are:
    1. Management Servers;
    2. Web Console;
    3. Gateway Servers;
    4. Agents;
    5. And a new MPB.

  5. Installing the MSP files
    Always install the MSP files from an elevated cmd prompt; it saves you a lot of hassle. Use this syntax: msiexec.exe /update <NAME OF MSP FILE>. Use the order as described in the KB article (a sketch follows after this list).

  6. RTFM…
    Even though this UR contains two MPBs for OM12 SP1, ONLY ONE NEEDS TO BE IMPORTED. The related KB article is clear about it, so make sure you import the correct MPB file, Microsoft.SystemCenter.AlertAttachment.mpb.

    However, the KB article gives the wrong location. The correct location where to find this MPB file is: ~:\Program Files\System Center 2012\Operations Manager\Server\Management Packs for Update Rollups.

  7. Check and double check
    The file version of UR#1 for OM12 is 7.0.9538.1005. For the OM12 Management Server component, two DLL files are updated: Microsoft.EnterpriseManagement.Modules.PowerShell.dll and Microsoft.Mom.Modules.ClientMonitoring.dll, located in the folder ~:\Program Files\System Center 2012\Operations Manager\Server.

    Also check the presence of the related MSP files in the Agent staging folders on your OM12 Management Servers (~:\Program Files\System Center 2012\Operations Manager\Server\AgentManagement\<ARCHITECTURE>). File to look for: KB2784734-<ARCHITECTURE>-Agent.msp.
  8. Common mistakes
    1. UR#1 for OM12 SP1 doesn’t contain an update for OM12 Reporting;
    2. Forgetting to update the file %windir%\Microsoft.NET\Framework64\v2.0.50727\CONFIG\web.config;
    3. Trying to import the other MPB as well;
    4. When monitoring X-plat: forgetting to update the UNIX\Linux MPs, to be found here;
    5. Running the updates with insufficient permissions (make sure you’re admin on the servers, in OM12 and on the OM12 databases!).
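
To illustrate Steps 5 and 7: a short sketch of installing an extracted MSP and verifying the result afterwards. The placeholder follows the KB article’s own syntax; the DLL names and expected version come from Step 7:

# Step 5: install each extracted MSP from an elevated prompt, in the order the KB article describes
msiexec.exe /update <NAME OF MSP FILE>

# Step 7: both updated DLLs should report file version 7.0.9538.1005
$folder = 'C:\Program Files\System Center 2012\Operations Manager\Server'
Get-Item "$folder\Microsoft.EnterpriseManagement.Modules.PowerShell.dll",
         "$folder\Microsoft.Mom.Modules.ClientMonitoring.dll" |
    ForEach-Object { $_.VersionInfo.FileVersion }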

Hopefully this posting will aid you in applying UR#1 for OM12 SP1 in a better way.

Tuesday, March 19, 2013

OM12 Operations Manager Web Part Reinstall In SharePoint Fails

Bumped into this issue. Somehow the Operations Manager Web Part wasn’t properly installed so it had to be removed. Easily done, since there is a well documented procedure for it. The uninstall went fine.

So it was time to reinstall it and this is the error I got:
[screenshot]

But this is the kind of error message I like since it tells me exactly what’s wrong: ‘…ERROR: Remnant files exist in the GAC, please remove and re-run the installation script...’ and on top of it, how to remedy it:

To remove these files it is necessary to quit out of the Powershell.
Stop (SharePoint 2010 Timer) service (net stop SPTimerv4).
Stop (Internet Information Services) (net stop w3svc).
Then manually delete the files listed from: C:\Windows\Assembly.
Then manually restart the (Sharepoint 2010 Timer) service (net start SPTimerv4).

Then manually restart the (Internet Information Services) (net start w3svc or IISReset).

Remnant files found...
Microsoft.EnterpriseManagement.SharePointIntegration.DLL

Stopping both services went fine (from an elevated cmd prompt!), but locating the file Microsoft.EnterpriseManagement.SharePointIntegration.DLL in the folder C:\Windows\Assembly wasn’t a success.

But the good old cmd prompt came to the rescue: the command
dir /s %windir%\Assembly\Microsoft.EnterpriseManagement.SharePointIntegration.DLL told me exactly where that file resided:
[screenshot]

Now it was easy to remove that file. I restarted both services I stopped earlier and reinstalled the Operations Manager Web Part as documented. This time the deployment ran without any issues at all.

Friday, March 15, 2013

Q&A Savision Webinars ‘The Saga Continues’

As stated before, Insight24 and Savision had a joint effort, all about Service Pack 1 for System Center 2012.

For this joint effort I had the honor of writing a whitepaper, titled The Saga Continues: SP1 for SCOM 2012, which is still available for download from the websites of Insight24 and Savision.

Based on this whitepaper there were two webinars, one for Europe and another for the US. At the end of these webinars the audience had time to ask questions. All those questions and their answers are to be found in this posting.

SP1 webinars Q&A Sessions

  1. Do you recommend hosting the OpsManager & DW DBs on seperate servers?
    This depends on your situation. When you’re running a SCOM environment with many monitored servers, network devices and services/applications, and the number is 1000+, it’s advisable to separate the OpsMgr and Data Warehouse databases. Another situation which justifies this approach is when you’re going to use the SCOM Reporting component to a great extent. Rendering reports and querying the Data Warehouse may create a high load on CPU, RAM and disks. In cases like that it’s better to separate both databases.

  2. Does SCOM 2012 have a web console view for mobile devices?
    No, it doesn’t. Or perhaps only on Windows phones, since the web console of SCOM 2012 depends on Silverlight™ instead of HTML 5. However, with the third-party add-on Wings from Jalasoft you have a ‘console’ on your mobile devices, whether they run iOS, Android or Windows.

  3. Does SP1 bring anything more to a "full monitoring suite" that adds network intelligence?­
    From what I’ve heard and understood, network monitoring in SCOM 2012 won’t become a full network monitoring tool. Network monitoring in SCOM 2012 is related to the 360-degree monitoring view. So network monitoring plays an important role here, but it won’t turn SCOM 2012 into a purebred network monitoring tool. Network monitoring in SCOM 2012 relates to the monitored (Windows) servers, so a mesh is automatically created based on their dependencies and connections.

  4. ­Do you know is the problem with running SCOM-console in other regional setting than english is fixed in SP1­?
    No, not at this moment.

  5. With 2007 R2 it was better to use SQL Enterprise. Is this still the case with SQL 2012 or can we use the Standard edition?­
    This isn’t totally true and needs more clarification. With SCOM 2007 R2 the Standard edition of SQL Server was OK as well, depending on your situation. For instance, when you’re monitoring huge numbers of servers, network components, services and applications, the Enterprise edition of SQL Server is a better choice. But when your SCOM environment is ‘limited’ to a couple of hundred servers, network components, applications and services, the Standard edition of SQL Server will most certainly fit the bill, especially when you separate the OpsMgr and Data Warehouse databases. This scenario is still valid in SCOM 2012 (SP1).

  6. ­Is there any fix for the problem of mac address insted of name presentation in network devices performance views?­
    No, not at this moment.

  7. ­Just need to know difference between the web transaction monitoring , synthtic monitoring & web availability­.
    There is much to tell, too much actually. The biggest difference however is that web availability monitoring does what it says: it monitors the availability of a particular website or part of a website. The other two go deeper and check the functionality of a website, like placing orders for instance.

  8. In SCOM will microsoft extend the dashboard views and widgets in the future­
    Yes, they will.

  9. Can you explain more about new rollup on april 2013? SP1 is already avalaible, what is this new coming­?
    First of all, Update Rollup #2 (that’s what we’re talking about here) will only touch the System Center 2012 SP1 components it’s meant for. In other words, UR#2 will mainly contain hotfixes for the System Center SP1 components involved. Alongside the UR#2 release a new Exchange Connector for SCSM 2012 SP1 will be released, which is more stable and has gained more functionality as well.

  10. Is there a command line available to install the SCOM agent through SCCM?­
    Yes there is. When a SCOM 2012 (SP1) Agent is deployed, its control panel applet is installed along with a specific DLL file which contains special .NET functions. These functions can be referenced by using VBScript or PowerShell for instance. Kevin Holman blogged about it in more detail (read the WHOLE posting :)): http://blogs.technet.com/b/kevinholman/archive/2011/11/10/opsmgr-2012-new-feature-the-agent-control-panel-applet.aspx. A hedged command-line sketch follows after this Q&A list.

  11. Why is Microsoft not dividing 2012 MPs and 2007?­
    Simply because there is still a market out there using SCOM 2007, and because SCOM 2007 has mainstream support up to July 2014: http://support.microsoft.com/lifecycle/?p1=13876. Nonetheless, some MPs are only for SCOM 2012, using Management Pack Bundles and/or the special widgets. With time passing by I think/expect we’ll see MPs using these SCOM 2012 specific functionalities more and more.

  12. ­How does multi-home clients work?
    Any SCOM (2007/2012) Agent can communicate with up to four different SCOM Management Groups (MGs). Per MG the relevant Rules and Monitors are loaded and executed. The collected results are sent back to the relevant MG.

  13. Currently we have SCOM 2007 R2, can we run it "in parallel" with brand new 2012 SP1 making sure 2012 is all working good then shut down 2007 environment ?
    Yes you can. The best approach here is to install SCOM 2012 Agents since these are backwards compatible with SCOM 2007 Management Groups. So these Agents become multihomed and communicate with the SCOM 2007 MG and the SCOM 2012 MG.

  14. Should we wait for SC2012 SP1 before implementing SCOM2012 SP1?­
    SP1 for SC2012 (and SCOM 2012) is already Generally Available. So there is no need to wait.

  15. What is the best way to decomission the old SCOM servers after you have SCOM 2012 in place­?
    That depends on the situation. When you have chosen an in-place upgrade path there won’t be any old SCOM server left after the upgrade to SCOM 2012 is finished, simply because the old SCOM 2007 servers are upgraded to SCOM 2012 servers. However, when you use a side-by-side scenario, where the multi-homed SCOM 2012 Agents talk to a SCOM 2007 Management Group (MG) and the SCOM 2012 MG as well, you have to set those Agents to talk ONLY to the SCOM 2012 MG. See Q/A #10 for how to go about that. Then you can safely remove the SCOM 2007 MG by first removing the SCOM 2007 Management Servers and then the Root Management Server.

  16. Will all the 2007 R2 MPs work with 2012?
    Yes they will. Even with the RMS removed in SCOM 2012, there is an RMS Emulator Role, simply for backwards compatibility with certain MPs, like the Exchange 2010 MP for instance.

  17. Does SP1 bring anything more to a "full monitoring suite" that adds network intelligence?­
    See Q/A #3.

  18. What is a Rollup to a SP supposed to provide and can you just get all inclusive SP1 inclusive of the Rollup?
    A Rollup mostly contains hotfixes and patches, along with some added/extended functionality, starting from the point the SP was introduced and became Generally Available. A full-blown ISO containing the SP plus the Rollups won’t become available. However, at a certain point in time a new SP will become available, containing the previous SP and the Rollups, with added functionality as well.

  19. Also Do my Management Points require a seperate SP1 and rollup?
    If by Management Points you mean the related Management Servers, the answer is yes. The SP and Rollups touch all components of SCOM: the Management Servers, the Agents and sometimes the core Management Packs and SQL databases as well. With a SP these are always ‘touched’; with a Rollup it depends on the hotfixes/updates it contains.

  20. I was told i can upgrade my VMM and my DPM to SP1 seperate from SCOM2012 is this correct?
    Yes, it’s correct. But make sure the interconnectivity between these products is updated as well, so it requires additional attention, like the connection between SCVMM <> SCOM for instance.

  21. Currently running OpsMgr 2012 in Windows 2008 R2 enviroment. Is it recommended to upgrade all systems to Windows Server 2012 and then install OpsMgr 2012 with SP1?
    No, simply because SCOM 2012 RTM doesn’t support Windows Server 2012 as the Server OS upon which the Management Servers are installed. That support is only available from SCOM 2012 SP1. So the way to go is: SCOM 2012 RTM > SCOM 2012 SP1 > Windows Server 2008 R2 SP1 > Windows Server 2012.

  22. In a multi-homed agent (SCOM 2007+2012), if you uninstall the agent from the SCOM 2007 console, will it uninstall the agent completely and also stop reporting to 2012?
    No, it won’t. Simply because the multi-homed Agent in this case is ONE Agent, based on SCOM 2012. So that SCOM 2012 Agent will be removed. See Q/A #15 and Q/A #10 for more information.
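
Back to Q/A #10 for a moment: the command line itself (which you can wrap in an SCCM package) boils down to an msiexec call against MOMAgent.msi. A hedged sketch with placeholder values; check Kevin’s posting and the OM12 documentation for the full list of properties:

# Manual/SCCM-driven install of the OM12 agent (values are placeholders)
msiexec.exe /i \\server\share\MOMAgent.msi /qn `
    USE_SETTINGS_FROM_AD=0 `
    MANAGEMENT_GROUP=OM12_MG `
    MANAGEMENT_SERVER_DNS=om12ms01.sc.local `
    ACTIONS_USE_COMPUTER_ACCOUNT=1 `
    AcceptEndUserLicenseAgreement=1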

Advice
Whenever you want to know more about upgrading to SCOM 2012 (SP1) or/and Windows Server 2012 / SQL Server 2012 (SP1), go here, an aggregation of many blog postings all about upgrading and the various paths and products.

Also, when you want to KNOW EVERYTHING THERE IS TO KNOW ABOUT SCOM 2012 (SP1), simply BUY THE SCOM 2012 BOOK: System Center 2012 Operations Manager Unleashed, written by Cameron Fuller, Kerrie Meyler, Pete Zerger and John Joyner. The contributing authors are Jonathan Almquist, Alex Fedotyev, Scott Moss, Oskar Landman and Marnix Wolf, which is me :)!
[screenshot]

Microsoft MP Blog

Daniel Savage, a PM working at Microsoft, has started a series of blog postings all about the MPs delivered by Microsoft.

Daniel has taken upon himself the responsibility ‘…for all things related to Operations Manager Management Packs including customer satisfaction related to MPs for Microsoft server workloads…’

And states: ‘…If you have a problem with a Microsoft MP and are not getting traction through the normal channels, please let me know…’. On the same blog posting you’ll find his contact details.

So RESPECT for Daniel and hooray for Microsoft. Now we have a single point of contact for all things related to the MPs released by Microsoft.

Since it’s a daunting task for one person, please help him. Not by bashing or flaming, but by delivering good and useful information to Daniel, all about your personal experiences with the MPs delivered by Microsoft. I have spoken with Daniel on some occasions and he’s a person with a real drive to make things better. He has a strong belief in SCOM (as do I) and wants to make it even better. But he can’t do it on his own, so help him.

I hope to speak with him soon in order to share my experiences and thoughts.

Wiki: Microsoft Management Packs

Update 03-21-2013: A reader of my blog pointed it out to me: I forgot to add the link to the Wiki! That’s a bummer, but I corrected it. A big word of thanks to Thomas IV :).

Some days ago Daniel Savage, a Microsoft PM, started a new Wiki, all about the Microsoft Management Packs.

This Wiki contains links to all the released MPs with their version numbers and release dates. So now there is a one-stop shop where you can find all the MPs made by Microsoft.
[screenshot]

On top of that, this same Wiki also shows which MPs are ‘in the pipeline’ and to be expected, and whether each MP is a totally new one or an iteration of an existing MP.
[screenshot]

I am happy to see this initiative. Wiki to be found here.

Management Packs: The Pain & The Glory

When looking at SCOM, no matter what version, it’s the Management Packs (MPs) that make it tick. Simply because without any additional MPs, no monitoring – except for SCOM itself – will happen. So SCOM delivers the framework for the monitoring, and the MPs deliver the monitoring functionality.

And as we all know, the quality of the MPs differs. A lot. Some MPs are really good and shiny examples, whereas others aren’t that good or outright bad, and bring down the overall monitoring experience.

And yes, I have been singing this song for quite a few years now, even before I became a MVP. Being a MVP brings many advantages (thank you Microsoft). One of those advantages is that you get to know some of the people behind those very same MPs. Which makes it harder to criticize those MPs, because before you know it, it may become a personal thing. And that’s the last thing I want to start. No bashing, flaming or criticizing people on my blog. My intention is to make a product better by delivering objective criticism, aimed at the products and not at the persons involved.

My Top Ten of my MP personal wish list
Working with SCOM in all its iterations and with many MPs (from Microsoft and many third parties), I have built myself a personal wish list of how a MP should look and, more importantly, function. This is my Top Ten:

  1. MPs should be divided in to different building blocks:
    1. Library MP;
    2. Discovery MP;
    3. Monitoring MP;
    4. Presentation MP;
    5. Reporting MP;
  2. For Item 1: per product version only items 1.2 and 1.3 should be different and limited to that version only. So the dependencies should be right and not become spaghetti.
  3. Every MP should contain good and usable Reports;
  4. Every MP should have two guides:
    1. A quick start guide (to get you up and running fast);
    2. A comprehensive guide covering all aspects, also ALL Rules, Monitors, Discoveries and so on.
  5. Every MP should have many (or even all) Discoveries disabled, enabling a phased roll-out with:
    1. Two to three override MPs, enabling some or more overrides;
    2. Discovery Helpers, like the ones present in the Exchange 2007 MP.
  6. All MPs should adhere to a standard of monitoring, reporting, dashboards and presentation;
  7. All MPs should adhere to a standard naming convention;
  8. All MPs should be configurable by using a Wizard driven interface, asking questions and creating the correct override MP at the end;
  9. All MPs should be configurable by using the same Wizard as stated in Item 8;
  10. All MPs should use the same approach for monitoring and alert handling. So experiments like the Correlation Engine are a no-go OR should become standard for ALL MPs.

Three other things which are important:

  1. All MPs should come out OK when run against the Management Pack Best Practices Analyzer (MPBA). If not, the MP should be fixed before being published;
  2. All MPs should be tested thoroughly in a serious production environment before being published;
  3. All Microsoft products MUST be covered by MPs adhering to the Top Ten MP wish list.

This will make the MPs better and SCOM as well. I wonder what your thoughts/ideas are about this topic. Please feel free to share.

Tuesday, March 12, 2013

A Dashboard Is Nothing But A Dashboard…

Some time ago a regular reader of my blog contacted me. Let’s call him Pete.

Pete had started using Savision Live Maps and was impressed by its possibilities and power. So far so good. But management was also very happy about it and wanted him to deliver dashboards that would make NASA’s mission control envious…

(Picture is borrowed from Wikimedia.)

…and would outsmart the state of the art cockpit of a Boeing 787 Dreamliner…

(Picture is borrowed from Wired.)

And now there was ‘some’ pressure on Pete, which is highly understandable. Because dashboards which are nested, show everything down to the smallest detail and yet show all interconnectivity as well, aren’t easily created. So this is what worried Pete to a huge extent. How to deliver dashboards like these? How to elevate his PS and VB scripting skills in order to get there?

So Pete contacted me and asked how to go about it.

Since Pete’s situation isn’t really unique I decided (with his agreement) to write a blog posting about it.

Back to the organization
First of all, dashboards like these aren’t going to stick at all. The main reason is that the organization behind them isn’t totally up to specs. This may sound harsh, but hear me out on this one. WHY does management want a 787 cockpit and mission control? For what purposes? Aimed at what and, more importantly, at whom?

And because management never told him that, Pete would never be able to deliver it. It will never be what they want, simply because they don’t know exactly what that is themselves. Technology isn’t the answer here, but processes, people and organization.

Anecdote
In order to make it clearer I told Pete an anecdote about something that happened to me many years ago. It was one of my first lease cars. Brand new, but already after a week it was difficult to start. So to the dealer I went. A mechanic took a look, tried to start the car and experienced the same issues. He told me he would locate the issue and fix it. To my surprise he didn’t open the bonnet of the car but hooked the car up to a computer. The computer would locate the issue in a matter of seconds, he told me, and then he would fix it.

But the computer told him all was just fine. So he ran multiple diagnostics and still the computer told him all was OK. Then he called his superior and together they ran the same diagnostics! After an hour I almost begged them to open the bonnet and take a look. But no, the computer would tell them what was wrong. I was getting upset because it took too much time and I had an important appointment. So I opened the bonnet myself and then I had a big laugh!

Even I, a total amateur on cars, saw what was wrong: an air hose which fed into the engine was totally cracked. So when trying to start, the engine got too much air! Both mechanics were totally embarrassed. They fixed it with some tape and told me they would order a new part for it.

Metaphor
While driving to my customer I realized this was a shiny example of computers and their limitations. The computer checked the running engine, looking at temperatures, compression/pressure ratio and the air/fuel mixture. And yes, that was all OK. Within specs. So the computer told them what it measured. But the mechanics using that computer forgot one crucial aspect: the computer had a LIMITED view of the whole car!

Using the car anecdote as a metaphor there are many striking resemblances:

  1. The car represents the IT services any IT department delivers;
  2. The driver of the car represents the end-users which ‘consume’ the IT services the IT department delivers;
  3. The computer running the diagnostics represents the cockpit/flight-control center the managers are demanding;
  4. The broken air hose represents the root cause of a failure resulting in a disruption of the IT services which affects the end users;
  5. The mechanics represent the managers looking at the dashboards, telling the end users everything is OK while they experience issues.

Hopefully you see what I am driving at. In Step 5 the managers will realize sooner or later that something is amiss but not shown on their flashy, shiny, posh dashboards. So they’ll go to the one who built them. It won’t be a nice conversation, I am afraid…

Perception management
Therefore it’s better to stay away from situations like that and to start at the foundation, which is perception management. Let management know that no matter what they’re looking at, it will always be a scoped view of reality. Simply because you can’t take everything into consideration. Otherwise you’ll end up with way too many items and lose track and control as well.

A 787 Dreamliner cockpit is an excellent example. Instead of showing everything, it shows the status of the systems involved. And yes, the pilots can zoom in to the very detail of the smallest component, but only when required. That’s normal for a cockpit but not for an IT department. If they want such functionality, management must invest heavily. Not only in licenses but also in time, resources and training. This can’t be built by one man!

Too many times I bump into situations like these and have to make management see what they’re really asking for. That’s what I call perception management. As an outsider it’s easier to get the message across, but it can be a challenge as well. Telling management how many licenses, how much time and how many resources and man hours they require to deliver what they want brings them to their senses, enabling one or more projects which are SMART. Like delivering dashboards depicting the most crucial IT services, composed of the components monitored by SCOM.

This brings one back to SCOM as well, since the basics (component monitoring) have to be spot on. So only Alerts which mean something to the organization are shown and, even more important, relevant State Changes (Healthy > Warning > Critical). Those State Changes will be used on the dashboards, so it’s crucial to get them right.

Where the dashboard ends and people, processes and organization kick in…
Again, this is something that shouldn’t be done by just one or more IT persons. Simply because they represent a small piece of the whole puzzle. What about Change Management, Application Owners, DBAs, end-users, Service Managers and application developers, to name a few?

What about an Alert or a State Change on a Dashboard? Who is going to do what/when and how? How is everything logged and tracked? When is something an incident and when not? Who decides that? How are escalations performed?

What Alerts are scoped to working days and –hours only and what Alerts are processed on a 24/7 hours basis? And why?

As you can see, a dashboard is only a dashboard and nothing more. There is way more to it, like people, processes and organization. Discuss it with the managers involved. Make them realize a dashboard needs way more attention. Otherwise they’re looking at a scoped view of reality, missing out on tons of information and lacking proper execution when something goes wrong.

The push
Also important: this has to be driven top down. So management has to know this, make it clear as a project and drive it through the organization. Otherwise it won’t stick and won’t work at all. Budget and resources must be allocated as well.

Recap
As you can see, good relevant dashboards are key to proper monitoring. But they’re nothing more than a means to an end. When the proper processes aren’t in place and the organization lacks execution, a dashboard won’t change that. Therefore it’s crucial for organizations to get those up to level as well. Only then will dashboards add value to the organization as a whole.

Want to know more?
Check out these URLs:

  1. Microsoft has written tons of good information about it:
    Core Infrastructure Optimization
  2. A three part series of postings I wrote some time ago (but still valid): http://thoughtsonopsmgr.blogspot.nl/2009/11/opsmgr-where-technology-ends-and.html

Friday, March 8, 2013

Final Call: Whitepaper ‘The Saga Continues’ Available For Download AND JOIN THE WEBINAR!

As stated before, the FREE whitepaper, all about Service Pack 1 for System Center 2012, is available for download.

Also, two webinars about this whitepaper, hosted by me, will be given next week:

  1. March 12th at 10.00 AM Central European Time Zone;
  2. March 14th at 3.00 PM Eastern Time Zone.

Hope to see you all there. Bye!

Thursday, March 7, 2013

System Center Advisor (SCA) Is A Free Service!

Yesterday Microsoft published a blog posting all about SCA. The most important news to share was this: ‘…as of January 2013, Advisor is free to Microsoft customers in supporting countries. Trial periods are a thing of the past and Software Assurance is no longer required for taking advantage of the Advisor service…’.

These supporting countries are:
[screenshot]

Even though the SCA architecture is totally SCOM based (including MPs and a Gateway Server), it’s anything but a REAL TIME monitoring solution.

Data is collected based on the MPs which are customized for SCA. Once a day (during the night) the collected data is sent to the cloud and processed. Based on best practices, experience from Microsoft Customer Support Services and so on, feedback is given about the configuration of the monitored products.

So this way you’ll know whether your Microsoft based environment is up to specs. Microsoft technologies supported by SCA are:

  • Windows Server 2008 and 2008 R2:
    • Active Directory
    • Hyper-V Host
    • General operating system
  • SQL Server 2008 and later
    • SQL Engine
  • Microsoft SharePoint 2010 and later
  • Microsoft Exchange Server 2010 and later
  • Microsoft Lync Server 2010

Personally I guess there is more to come, not only about the supported Microsoft technologies but also about the functionality of SCA itself. At least that’s what I think based on this entry in the same blog posting: ‘…Last but not least...stay tuned for some exciting Advisor news during this years MMS 2013…’

In itself, offering SCA to Microsoft customers is a logical step. Why not start troubleshooting before issues really start happening? Prevention is key here. With Azure and SCA this is feasible; it will save Microsoft money in the long run and make the way their products are received and evaluated a better experience.

For myself I wonder how long it will take before SCOM itself becomes a cloud service offering by Microsoft. Not too long I guess.

Want to know more about SCA? Check out these links:

  1. Detailed information about SCA;
  2. Getting started with SCA;
  3. Supported countries for SCA;
  4. Previous postings about SCA on my blog.

I’ll keep you posted about the latest SCA developments.

Monday, March 4, 2013

DMZ Monitoring Of Windows 2003 Servers Fails: Root Certificate Not Accepted

At a customer’s location some DMZ servers had to be monitored. These servers were part of a Workgroup, so Kerberos wouldn’t do for the required authentication and encryption. Instead special certificates were used, issued by a CA based on Windows Server 2008 R2 SP1 Enterprise.

Everything worked fine until some Windows 2003 Servers residing in the DMZ had to be monitored.

The issue
Even though the CA Certificate Chain was properly loaded in the Trusted Root Certification Authorities store of the computer account of the Windows 2003 servers, the SCOM certificate gave an error: ‘The integrity of this certificate cannot be guaranteed. The certificate may be corrupted or may have been altered.’
[screenshot]

The Certificate status, found under the tab Certification Path, gave this error: ‘This certificate has a nonvalid digital signature.’
[screenshot]

Again, we were puzzled. It was the same CA Certificate Chain used on the Windows 2008 servers, where it caused no issues at all. And the SCOM Certificate was loaded in the proper store as well.

Since all Windows 2003 servers had the same issue, something else was happening here. An incompatibility issue most probably between the CA based on Windows Server 2008 R2 and Windows Server 2003.

The cause
After some searching on the internet, we bumped into KB968730 stating: ‘…Windows Server 2003 and Windows XP clients cannot obtain certificates from a Windows Server 2008-based certification authority (CA) if the CA is configured to use SHA2 256 or higher encryption…’

In other words (taken from this blog posting so all credits go to Tim Jacobs!): ‘…Windows 2008 has several new additions to the cryptography API, called Cryptography Next Generation (CNG), that are used in the V3 certificate templates for CA's and Webservers in Windows 2008. Amongst those new features is support for new certificate signing algorithms which is not recognized by older clients…’
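
A quick way to check whether your CA chain is signed with a SHA2 algorithm (and will thus trip up unpatched Windows 2003 clients) is to inspect the signature algorithm of the imported root certificates. A minimal sketch, run on a machine with PowerShell that holds the chain:

# Anything like 'sha256RSA' will not be accepted by Windows 2003 without the hotfix
Get-ChildItem Cert:\LocalMachine\Root |
    Select-Object Subject,
        @{ Name = 'SignatureAlgorithm'; Expression = { $_.SignatureAlgorithm.FriendlyName } }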

In the days that posting was written, the ONLY solution available was to REINSTALL the CA! But that’s something you don’t want to do unless there is no other solution available.

The fix
Gladly, years have gone by and Microsoft published a hotfix which solves this issue on the Windows 2003 servers. So no need to reinstall your CA :).

KB968730 has this hotfix available for download.

Recap
Whenever you need to monitor DMZ servers, or other Windows servers which reside outside the trust boundary of your SCOM MG, and those servers are Windows Server 2003 based AND your CA is Windows Server 2008 based, chances are you’re going to need the hotfix listed in KB968730.

Be Careful When Configuring Agent Heartbeat Interval. Otherwise Tons Of EventID 20022 & 20021…

Bumped into this situation at a customer’s location. Every hour the OpsMgr event logs of the RMS and MS servers were flooded with tons of EventID 20022, source OpsMgr Connector:

‘…The health service {<GUID>} running on host <FQDN NAME AGENT> and serving management group <MG NAME> with id {<GUID>} is not heartbeating…’

And seconds later the same logs were flooded again with EventID 20021, source OpsMgr Connector:

‘…The health service {<GUID>} running on host <FQDN NAME AGENT> and serving management group <MG NAME> with id {<GUID>} is available through the server <FQDN MS SERVER>…’

This went on and on for months. The customer had no clue what caused it. The network had already been scrutinized for issues, but nothing was found.

So it was time for an investigation.

Cause
This MG is monitoring many Windows servers and the MG itself resides on a separate VLAN. First I suspected the MG was overloaded, monitoring too many Windows Servers with not enough MS servers. But that wasn’t the cause. The MS servers were pulling their weight but operating within their limits. The RMS was in good shape as well.

The SQL servers were also in good shape, so no issues there either. Then it was time to look at the connections, especially from the VLAN where the SCOM MG resides to all monitored Windows Servers. But the network specialists assured me all was well on that part. No connectivity issues at all.

So back to SCOM it was. Time to look at the heartbeat interval settings for the Agent (Administration > Settings > Agent > Heartbeat). And this surprised me: from the default interval setting of 60 seconds it had been lowered to 20 seconds. And this setting is enforced for ALL monitored Windows Servers…

On the server side the heartbeat setting (Administration > Settings > Server > Heartbeat) was at its default, 3 missed heartbeats. But combined (3 x 20 seconds) there is only a window of 60 seconds in which an Agent is allowed not to communicate with the MS servers before an Alert (EventID 20022) is raised.

So whenever there is a small hiccup in the network, chances are the event logs of the RMS and MS servers will be flooded. First by EventID 20022, telling you there is no communication, and a second later by tons of EventID 20021, telling you all is OK again.

Why
Of course there is always a reason for it, so I asked why this setting was modified. The customer told me they have some issues with a certain set of servers.

These servers might reboot and come back online very fast, since they’re VMs on very good virtualization hosts. Yet the customer needed to know whenever such a server rebooted, so they lowered the heartbeat interval setting from 60 seconds to 20 seconds. Simply because the default heartbeat interval, combined with the server-side setting, gave a detection window of 3 minutes, which was too long for these VMs: they rebooted and were fully functional again well within that time range, so the reboot went unnoticed.

Solution
Now the customer realized this wasn’t the way to go since it caused other unmentioned side issues as well. So they asked me what to do instead.

This is what I advised in order to solve the flooding of the event logs:

  1. Modify the heart beat interval to the default setting, which is 60 seconds;
  2. Monitor the OpsMgr event logs of the RMS and MS servers for the rest of the day in order to see no more flooding takes place.

This part takes care of the flooding issue. And yes, after this modification the flooding didn’t happen anymore. Which is much better.

Part two of my advice, in order to be alerted on (and even report about!) the problematic set of servers which reboot too often:

  1. Create a Group containing the set of servers they want to know they rebooted;
  2. Write a Monitor or Rule (depending on what functionality they really want) in order to catch the reboot of that set of Servers, targeted against that Group (create the Rule/Monitor, targeted against a general Group like Windows Servers for instance, disable it and enable it through an override targeted against the Group created in Step 1).

This way SCOM is used for what it’s meant for, and when using a Rule the collected events can be piped into the Data Warehouse and used for a customized Report, telling the customer which servers rebooted and when, during a certain time frame.

EventIDs you can track are (all to be found in the System log; a quick check follows after the list):

  • EventID 6009 (<WINDOWS VERSION> Multiprocessor Free);
  • EventID 6005 (The Event log service was started);
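
Before building the Rule/Monitor it’s worth verifying these events actually show up on one of the problematic servers. A quick check (Get-WinEvent needs a recent OS/PowerShell, so run it remotely when targeting an older box; the server name is a placeholder):

# The last ten start-up related events from the System log of a server
Get-WinEvent -ComputerName 'server01' -FilterHashtable @{ LogName = 'System'; Id = 6005, 6009 } -MaxEvents 10 |
    Select-Object TimeCreated, Id, Message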

Recap
Whenever some Windows Servers require special attention because they reboot too often / too fast, don’t use the Agent Heartbeat Interval option to identify those servers.
[screenshot]

This setting affects ALL monitored Windows Servers and is most likely to result in unwanted side effects. It’s better to create a Rule/Monitor aimed at catching the specific EventIDs telling you the server rebooted.

Even when you do want to modify the Agent Heartbeat Interval setting, please do so on a per Windows Server basis (Administration > Device Management > Agent Managed > select the Agent you want to modify and double-click it > first tab Heartbeat > select the option Override global agent settings > now you can modify Heartbeat Interval (seconds)).
[screenshot]

Another advice:
It’s Best Practice not to lower the Agent Heartbeat Interval, since 60 seconds is already tight enough. Many times it’s set a bit higher (in increments of 10 seconds) on a PER AGENT basis, when those Agents reside in a part of the (remote) network which has some latency issues.
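
And when you do need to relax the interval for a handful of Agents, the per-agent override can be scripted too. A sketch based on the SDK’s AgentManagedComputer object, which to my knowledge exposes a HeartbeatInterval property; the server names are placeholders, so verify before use:

Import-Module OperationsManager

# Hypothetical agents on a remote network with latency issues
$agents = Get-SCOMAgent -DNSHostName 'remote01.sc.local', 'remote02.sc.local'
foreach ($agent in $agents) {
    $agent.HeartbeatInterval = 90   # default is 60 seconds; raise in steps of 10
    $agent.ApplyChanges()           # commits the per-agent override
}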

Hopefully this post prevents the flooding of the OpsMgr event logs with EventIDs 20022 and 20021.