Thursday, May 21, 2015

SCCM 2012 R2 SP1/SP2 Confusion

Confusion?
Besides SCOM I also ‘touch’ SCCM 2012x environments. And since SP1/SP2 for SCCM 2012 R2 became available for download, I have received many questions about these updates. With this posting I hope to clarify things without making the confusion even bigger.

The confusion I see is about two topics:

  1. SP1 and/or SP2 for SCCM 2012 R2. What update package to apply?
  2. Updates can be downloaded from TechNet EVALUATION Center as well. Can you apply these updates in production or not?

1st Confusion: SP1 and/or SP2 for SCCM 2012 R2?!
Since Microsoft had two different code bases (SCCM 2012 & SCCM 2012 R2) their engineering process wasn’t 100% efficient. Therefore Microsoft decided to merge both code bases in order to achieve a maximum level of efficiency, resulting in a higher product quality.

With the Service Packs for SCCM 2012 (SP2) and SCCM 2012 R2 (SP1) this code base merger has started. Therefore you’ll find TWO different update files, representing TWO different Service Packs:
[screenshot]
So far so good. But unfortunately, the naming convention Microsoft used for these files only adds to the confusion…

SC2012_SP2_Configmgr_SCEP.exe
This update file contains SP1 for SCCM 2012 R2. But it ALSO contains SP2 for SCCM 2012 SP1. This is THE file in which the earlier discussed code base merger takes place.

Product in place      Product after upgrade      Required file
SCCM 2012 R2          SCCM 2012 R2 SP1           SC2012_SP2_Configmgr_SCEP.exe
SCCM 2012 SP1         SCCM 2012 SP2              SC2012_SP2_Configmgr_SCEP.exe


SC2012_R2_SP1_Configmgr.exe
This update file contains the R2 features for SCCM 2012 SP2 environments. I know, the name of this update file adds to the confusion. But this update file is ONLY meant for upgrading SCCM 2012 SP2 environments to SCCM 2012 R2 SP1 level.

Product in place      Product after upgrade      Required file                   Remark
SCCM 2012 SP2         SCCM 2012 R2 SP1           SC2012_R2_SP1_Configmgr.exe     First install update file SC2012_SP2_Configmgr_SCEP.exe
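
Not sure which product is currently in place? You can check the site version on the site server before picking an update file. A minimal sketch, assuming the standard SMS setup registry key is present on the site server; the build-to-product mapping below is my own summary, so verify it before relying on it:

# Read the ConfigMgr site version from the site server registry (assumes the standard SMS setup key)
$Setup = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\SMS\Setup'
$Version = $Setup.'Full Version'
# Map the build number to a product level (my own summary - verify before relying on it)
switch -Wildcard ($Version) {
    '5.00.7804*' { "SCCM 2012 SP1 ($Version): use SC2012_SP2_Configmgr_SCEP.exe" }
    '5.00.7958*' { "SCCM 2012 R2 ($Version): use SC2012_SP2_Configmgr_SCEP.exe" }
    '5.00.8239*' { "Already on SCCM 2012 SP2 / R2 SP1 level ($Version)" }
    default      { "Unrecognized build: $Version" }
}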

2nd Confusion: Updates From The EVALUATION Center?!
The required update files are available from different locations:

  1. TechNet Evaluation Center (SP1 or SP2);
  2. MSDN;
  3. Microsoft Volume Licensing Service Center (from the 27th of May 2015).

Especially the first download location causes some confusion. Meaning: can you roll out these update files in production when they come from an EVALUATION website?

In this case the answer is YES, you can. The update files are all the same, no matter what download location you use.

Used sources
This posting came to be using these resources, so all credits go to the people running these blogs:

Monday, May 18, 2015

NiCE Free Log File MP & Regex & PowerShell: Enabling SCOM 2 Count LOB Crashes

Issue
Suppose you’ve got a line of business application (LOB) which logs everything into a log file (verbose logging). Among those log entries are also the crashes of the LOB itself.

And now some people in your organization have heard about SCOM and are responsible for the total performance and availability of that LOB. Even though SCOM already monitors that LOB on many levels (Windows Server OS, SQL databases, SQL reporting, IIS related web sites and services, and specific services and processes), these same people would LOVE to have a View AND a Report in SCOM all about the total amount of LOB crashes per previous day.

Why? SCOM is sold to them as ‘the single pane of glass’. That’s why!

So now SCOM has to collect that data, plot it into a performance graph in the SCOM Console and pipe that same data into a Report. In itself nothing strange, since SCOM can plot anything as long as it’s a performance counter.

The challenges…
Okay. I guess by now you’ve already spotted the first challenge? But there are more…

  1. So here is the first and most obvious challenge: ‘…as long as it’s a performance counter…’. But those LOB crashes aren’t performance counters at all. They are written PER app crash as an entry to the log file.
  2. Somehow ALL the app crashes of the previous day for the LOB have to be collected and counted in order to get a total.
  3. Sometimes (not always…) the default log file is closed off, saved under another name/format and then a new log file is started. And the LOB app crash information of the previous day can be found in either one of those log files…
  4. And last but not least, it would be TOTALLY AWESOME if this solution were portable to other LOBs as well. So not too many customizations please.

The different components required to address the challenges
There are multiple ways to address these challenges. One would be to author an MP which creates a data source based on that log file, gets the proper information, puts it into a property bag and sends it to SCOM, which processes it as a performance counter.

However, I am anything but an MP author. So that’s a no-go area for me. I know the theory but lack the serious skills to get it working in a proper manner. Therefore I needed solutions which are already available, and after some discussions with some peers this is the list of ‘ingredients’ I got in order to address the various challenges:

  1. NiCE Free Log File MP
    With this MP one can map entries in log files to performance counters, used by SCOM. Some good regex is required in order to get the job done.

  2. PowerShell
    With PowerShell one can examine files (log files as well!) and check AND count certain strings, even certain combinations. This collected data can be piped into another file when required (see the short sketch right after this list).

  3. PowerShell (again, this isn’t a typo)
    When there are multiple files to be checked, as described in Challenge 3, this can be solved with some good PS scripting as well.

  4. Using free available software & solutions
    When using Items 1 to 3 AND documenting it properly, you’ve got a solution which is portable to other LOBs as well. Awesome!
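
To illustrate Item 2, here is the counting mechanism in its smallest form. A minimal sketch; the path is just an example and the case-sensitive ‘AppCrash’ marker is the one used in my test environment below:

# Count how many lines in a log file contain both yesterday's date and the crash marker
$Yesterday = "{0:yyyy-MM-dd}" -f (Get-Date).AddDays(-1)
$Crashes = (Select-String -Path "C:\Temp\LOB_Log_Verbose.log" -Pattern $Yesterday | Select-String -CaseSensitive -Pattern AppCrash).Count
"$Yesterday : $Crashes LOB crashes found"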

Now we’re in business. It’s time to build the solution. But before I start I want to introduce my test environment.

Meet my test environment
For this posting I built myself a new LOB environment, or rather a simulation of one. There is no LOB at all in my test environment, only the stuff required to make this test work.

  • LOB verbose log folder on my SCOM test server: C:\Program Files\Business Critical App\Logs;
  • REGULAR LOB verbose log file: LOB_Log_Verbose.log;
  • Discontinued LOB verbose log file: LOB_Log_Verbose.log.YYYY-MM-DD (e.g.: LOB_Log_Verbose.log.2015-05-17).

This is what it looks like:
[screenshot]
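
For anyone rebuilding this setup: a small sketch to create that folder and seed the verbose log with some fake crash entries. The line layout of a real LOB log will differ of course; the only things that matter for this test are the date of the previous day and the case-sensitive string ‘AppCrash’:

# Create the simulated LOB log folder and seed it with two fake crash entries for yesterday
$LogDir = "C:\Program Files\Business Critical App\Logs"
New-Item -ItemType Directory -Path $LogDir -Force | Out-Null
$Yesterday = "{0:yyyy-MM-dd}" -f (Get-Date).AddDays(-1)
1..2 | ForEach-Object {
    Add-Content -Path "$LogDir\LOB_Log_Verbose.log" -Value ("{0} 09:1{1}:00 ERROR AppCrash - simulated LOB crash {1}" -f $Yesterday, $_)
}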

Time to start rocking!
So now we know the challenges and how to address them. Now it’s time to dive into the specifics.

    1. Download the FREE NiCE Log File MP and import it into your SCOM environment;

    2. As stated before, the default log file (LOB_Log_Verbose.log) can be closed off and saved under a special format (LOB_Log_Verbose.log.YYYY-MM-DD). And the required app crashes can be found in one of those files. These issues must be solved with PowerShell.

    3. When using the NiCE Free Log File MP one can also use regex for the name of the log file. However, the log file can become pretty big, requiring a lot of time to go through it all. Also, having to use regex to count ALL app crashes of the previous day can be pretty daunting when you’re not that familiar with regex.

      So why not use PowerShell here as well? Meaning: PowerShell looks for BOTH log files mentioned before, counts the total app crashes of the LOB of the previous day AND pipes that information into a NEW log file with a far easier format, like this (one line per day, as written by the script below):
      2015-05-17 TotalAppCrashes = 2 TimesTotal

      Let’s name this new log file AppCrashesCountPerDay.log and use that log file as the target for the NiCE Log File MP. Now it’s far easier to manage things on the SCOM side, since we’ve got an easier log file to monitor, which is kept outside the LOB verbose log itself and won’t be renamed nor removed. Based on that we can use absolute names and paths, making it even easier on the SCOM side of things.

      This PS script will take care of it all. Schedule it with Task Scheduler on the LOB server to run once a day. The very same PS script can be downloaded from my OneDrive.
      #############################################################
      # Script to count the total amount of LOB crashes of the previous day
      # This number is written to the log file 'AppCrashesCountPerDay.log'
      # Written by Marnix Wolf
      #############################################################

      # Set variables and create file name 'LOB_Log_Verbose.log.YYYY-MM-DD' based on previous day
      $PreviousDay = "{0:yyyy-MM-dd}" -f (get-date).AddDays(-1)
      $LOBLog = "LOB_Log_Verbose.log.$PreviousDay"
      $LOBLogCheck = "C:\Program Files\Business Critical App\Logs\LOB_Log_Verbose.log.$PreviousDay"

      # Tests whether the date-stamped LOB_Log_Verbose.log file exists. If so, run a total count of the related LOB crashes of that previous day
      If (Test-Path $LOBLogCheck){
          $TotalLOBErrorCount = (Select-String -Path "C:\Program Files\Business Critical App\Logs\$LOBLog" -Pattern $PreviousDay | Select-String -CaseSensitive -Pattern AppCrash).Count
      }Else{
          $TotalLOBErrorCount = (Select-String -Path "C:\Program Files\Business Critical App\Logs\LOB_Log_Verbose.log" -Pattern $PreviousDay | Select-String -CaseSensitive -Pattern AppCrash).Count
      }

      # Write output of correct log file to AppCrashesCountPerDay.log in specified format
      Out-File -filepath "C:\Program Files\Business Critical App\Logs\AppCrashesCountPerDay.log" -InputObject "$PreviousDay TotalAppCrashes = $TotalLOBErrorCount TimesTotal" -Encoding ASCII -Width 50 -Append

      # End of script
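
      As a side note: the ‘once a day’ schedule can also be created with PowerShell instead of clicking through Task Scheduler. A minimal sketch, assuming the script above is saved on the LOB server as C:\Scripts\CountLOBCrashes.ps1 (name, time and account are just example values; the ScheduledTasks cmdlets require Windows Server 2012 or later):

      # Register a daily scheduled task which runs the counting script at 00:15 as SYSTEM
      $Action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\CountLOBCrashes.ps1"'
      $Trigger = New-ScheduledTaskTrigger -Daily -At "00:15"
      Register-ScheduledTask -TaskName "Count LOB Crashes" -Action $Action -Trigger $Trigger -User "SYSTEM"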

    4. Let’s build the special Rule, based on the NiCE Log File MP: SCOM Console > Authoring > Management Pack Objects > Rules > right click > context menu > Create a new Rule;


    5. NiCE Log Files > Performance Rule > Advanced > Performance Rule (Advanced)
      [screenshot]
      > Next


    6. Don’t enable the Rule! We’ll enable it later through an Override targeted against a Group which is explicitly populated. Target the Rule against Windows Server or a less generic Class:
      [screenshot]
      > Next


    7. Skip the Preprocessing Settings screen:
      [screenshot]
      > Next


    8. As you can see, thanks to PowerShell I can use absolute names and paths. Awesome! Makes it easier to troubleshoot:
      [screenshot]
      > Next


    9. Save yourself a lot of pain and effort: just hit the Regex testing tool button. This tool makes life much easier. A BIG thanks to NiCE for this tool.
      [screenshot]


    10. In the box Logfile Line, paste a log file entry the NiCE Log File Rule must look for. In the box Filter Regex Pattern, enter the regex required to extract the information. Use the button with the > sign for more help with building your regex, or go to https://regex101.com/ for some online help/testing.
      [screenshot]
      The Sample Output (Xml) screen is KEY here for your success!!! As you can see there is an entry which starts with <TotalAppCrashes>. The next entry is <Capture> </Capture>. As you can see in the example, the total app crashes of the previous day (2 in total) is captured!

      This tells you the regex is okay and the output is working! Please note the TotalAppCrashes entry, since in the next screen it will be used as the performance counter we’re looking for.

      > OK.
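
      For what it’s worth, this is the kind of pattern I’d try first against the AppCrashesCountPerDay.log format produced by the PS script above. You can verify it locally in PowerShell before pasting it into the tool (whether the MP expects exactly this pattern syntax is an assumption on my side, so always confirm with the Regex testing tool):

      # Quick local test of the regex against a sample line from AppCrashesCountPerDay.log
      $Line = "2015-05-17 TotalAppCrashes = 2 TimesTotal"
      $Pattern = '^\d{4}-\d{2}-\d{2} TotalAppCrashes = (?<TotalAppCrashes>\d+) TimesTotal$'
      If ($Line -match $Pattern) {
          # The named capture group is what shows up as the <TotalAppCrashes> element in the Sample Output (Xml)
          "Captured: $($Matches['TotalAppCrashes'])"
      }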


    11. Now you’re back in the previous screen. However, the regex is now there and it’s tested for its proper functionality and output.
      [screenshot]
      > Next


    12. Here you can go nuts and enter any name you like for the first THREE fields. But keep it professional please. The MOST CRUCIAL field is the last one, VALUE. Use this syntax: $Data/RegexMatch/[confirmed output in Step 10]$.

      So in this case it becomes: $Data/RegexMatch/TotalAppCrashes$:
      [screenshot]
      > Create.


    13. Now the Rule is built. Create a Group and explicitly add the server where this Rule must run. Set an Override (enabling the Rule) on the newly created Rule for this new Group and wait some time (a few minutes).


    14. Create a new Performance View in SCOM using this new Rule:
      [screenshot]


    15. MANUALLY add a new line in the correct syntax to the log file AppCrashesCountPerDay.log and save the modification. Do this a few times, with some time between the new entries. Don’t forget to save the file after every modification! You do this in order to test the correct working of the new Rule you just made, like this:
      [screenshot]
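
      Instead of editing the file by hand you can also append such test entries with a single line of PS, using the exact format the counting script writes (the count value is just an example):

      # Append a test entry to AppCrashesCountPerDay.log in the same format the counting script uses
      $Today = "{0:yyyy-MM-dd}" -f (Get-Date)
      Add-Content -Path "C:\Program Files\Business Critical App\Logs\AppCrashesCountPerDay.log" -Value "$Today TotalAppCrashes = 5 TimesTotal" -Encoding ASCII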

      When all is okay, this will be shown in the SCOM Console:
      [screenshot]
      Of course, normally you’ll get ONE data point per day. But this is a test, remember?

      And when things don’t work, please check the OpsMgr event log of the related server. Chances are it will contain events logged by the NiCE Log File MP explaining why certain things don’t work (wrong path, wrong log file name and so on).


    16. When the performance graph works, it’s simple to create a Report: Reporting > Microsoft Generic Report Library > Performance. Use the newly made Rule as a performance Rule, targeted against the correct server. Use this posting for more details about how to make such a report.
      [screenshot]

      And:
      [screenshot]
      Please remember that the data in the Report comes from the DW, where it takes a few hours to get aggregated. So you’ll see the data up to two hours ago.

    Recap
    As you can see, the NiCE Free Log File MP rocks! And with some easy PS you can make life even easier for yourself and SCOM. For me personally, the manual provided by NiCE for this MP helped me a lot in getting things up & running. And please note that the SCREENSHOTS in that same document can also be of great help.


System Center 2016 Technical Preview 2: Open Source Software MPs

    For System Center 2016 Technical Preview 2 Microsoft released new Open Source Software (OSS) MPs, enabling monitoring of favorite open source applications running on Linux: Apache HTTP Server and MySQL Server.

    As Microsoft states: ‘…The OSS management packs join Microsoft’s current management pack offerings for UNIX/Linux operating systems and Java Application Servers. Paired with the Linux management packs, System Center Operations Manager is now the IT Admin’s one pane of glass for full LAMP stack monitoring!…’

    Want to know more? Go here.

    Gone In 90 Seconds?

    Just kidding. Microsoft has released some cool videos all about OMS. One of those videos lasts about 90 seconds and contains a quick introduction to OMS:
    [screenshot]

    The posting also contains other OMS information like a slide deck (OMS overview & features) and PDF file (about onboarding).


    OMS: Configuration Changes For IIS Log Collection

    When you’re using OMS (Operations Management Suite) AND using IIS Log Collection, this is IMPORTANT news.

    Taken directly from the System Center Operations Manager Engineering Blog: ‘…On Wednesday, June 3rd we (Microsoft) will be pushing down Operations Manager rules that will change how your IIS logs are sent to OMS…’

    And: ‘…Currently we push all IIS logs through your management servers before being sent to OMS.  We’ve found that in some cases this was putting too much load on the management servers.  In order to fix this, we’re going to be routing these logs from the agents straight to OMS – bypassing the management servers completely…’

    The same posting also provides details about some config changes you’ve got to make in SCOM. So please READ this posting when you’re affected and carry out the modifications as stated.

    Feedback For SCOM 2016(?) Required

    Microsoft is interested in your feedback related to SCOM. Therefore they’ve launched a survey in order to gain a better understanding of what drives your business and YOU. This enables them to improve the next version of SCOM.

    The more feedback they get, the better. So be frank and open. Don’t flame nor bash. But tell them what you want to see in the next version.

    I already filled out ‘my’ survey.

    Want to join? Go here.

    Cross Post: OM News From Ignite 2015

    The System Center Engineering Team has published a posting on their blog which refers to all SCOM related news from Ignite.

    When you didn’t attend Ignite and want to watch the SCOM related sessions this posting is the place to be.

    Wednesday, May 13, 2015

    Teaser: How To Use SCOM To Count LOB Crashes Using NiCE Log File MP & PowerShell

    Some time ago I helped a customer use SCOM to count their LOB application crashes as a performance counter, and to create a Report and a performance View for it in SCOM.

    In order to get the job done I used the FREE NiCE Log File MP, PowerShell & some regular expressions. Yes, it took me some time to get it right, but it works and that’s awesome.

    So I am going to rebuild this in my own lab since I don’t want to share customer specific information. When that setup is working I am going to blog about it in order to ‘spread the word’.

    Stay tuned!

    Cross Post: Automating Run As Account Distribution

    This is TOTALLY awesome: a PS script has been written by some of the best SCOM PFEs (Kevin Holman, Matt Taylor, Scott Murray, Mark Manty, Tim McFadden, and Russ Slaten) which automates the distribution of Run As Accounts.

    This script saves you a lot of time and effort. Want to know more? Go here and be amazed!

    Credits
    All credits for this awesome PS script go to Kevin Holman, Matt Taylor, Scott Murray, Mark Manty, Tim McFadden, and Russ Slaten. Thanks guys for your time, effort AND sharing!!!

    Thursday, May 7, 2015

    PowerShell: Auto Closing Alerts By Rules & Reckoning UTC Time With Daylight Saving Time

    Ouch. That’s quite a title. But something I bumped into some time ago. There is much to tell & share so let’s start.

    Issue
    By default SCOM stores everything in UTC time in its database. However, the SCOM Console shows those dates and times based on the local time settings of that ‘box’.
    In the Netherlands we use (UTC + 1:00), so there will be a delta of 1 hour between the times shown in the SCOM Console and the times as they are in the SCOM database. And when Daylight Saving Time is being used AND it’s ‘summer time’, that delta will be 2 hours.

    Example:
    When looking in the SCOM Console for all Alerts with the Name Power Shell Script failed to run, this is what the SCOM Console in my test lab shows me:
    [screenshot]
    Mind you, 4:24:22 PM is 16:24:22 and 4:24:21 PM is 16:24:21.
    However, when I use PowerShell in order to get these two Alerts by using this PS one liner
    Get-SCOMAlert -Criteria "Name='Power Shell Script failed to run' AND ResolutionState = 0" | FT Name, LastModified

    this is what I see:
    [screenshot]
    As you see, there is the earlier mentioned delta of two hours!

    What’s the BIG deal here?
    By itself it isn’t shocking. However, suppose you want to schedule a PS script – once per hour – which closes all Alerts generated by Rules which are 48 hours or older.

    In this case – because of the delta of two hours – Alerts generated by Rules which are only 46 hours old will be closed by that same scheduled PS script as well.

    One could say: well, a delta of 2 hours on 48 hours isn’t that much. So instead, modify the PS script and set it to 50 hours.

    Yes, that could work. But what about Daylight Saving Time? When it’s winter time in many European countries, the clock is adjusted, so the delta becomes 1 hour instead. And then the PS script should be adjusted to 49 hours instead.

    Chances are people are going to forget this. But there is an even bigger deal here.

    Suppose you want to close a certain set of Alerts, generated by a certain set of Rules, for a certain set of systems, which are TEN MINUTES or older? Since the delta is at least one hour, that same PS script will close ALL those Alerts, even the ones which are totally new, like one second old.

    Now that’s BAD!!! Because no matter how you schedule such a PS script, sooner or later Alerts will be closed by that PS script which shouldn’t be closed at all. So people are going to miss out on certain Alerts, and when they find out, they’re going to complain. And then the culprit (the badly written PS script) will be found, and after that the person who wrote it will have to explain him/herself…

    PowerShell blame game or to the rescue?
    I don’t know about you, but I LOVE PowerShell. Simply because it’s SO powerful AND there are so many resources out there. Many times I find myself using the Help function in PS in conjunction with PS scripts I find on the internet (thank you all for sharing!), modifying them to the requirements of my customer.

    So in this case I don’t blame PowerShell, since it does exactly as it’s ‘told’. Instead I use it to make the script a bit smarter, so it calculates the delta between UTC time and the local time on the box (in the Netherlands it’s either 1 or 2 hours) and uses that same delta to calculate the new offset used to filter the correct Alerts.

    This way the PS script becomes ‘Set & Forget’, since it doesn’t require any modification ever, no matter whether it’s summer or winter time. Awesome!

    In the case of the PS script which has to close certain Alerts, generated by certain Rules, for a certain set of systems, which are TEN MINUTES or older, the part of the PS script which calculates the exact delta and the resulting offset looks like this:
    $CurrentDate = Get-Date
    $UTC = $CurrentDate.ToUniversalTime()
    $Diff = $CurrentDate - $UTC
    $Conversion = [string]$Diff
    $Conversion = $Conversion.Replace(":00:00", "")
    [int]$TimeToAdd=$Conversion
    $AgeMinutes = $TimeToAdd*60+10

    In just 7 simple lines of PS code (of course I know there are multiple ways to reach the same goal, especially with PS, so feel free to comment):
    1. The current date is retrieved ($CurrentDate = Get-Date),
    2. Recalculated to UTC time ($UTC = $CurrentDate.ToUniversalTime()),
    3. The difference between the local time and UTC time is calculated ($Diff = $CurrentDate - $UTC),
    4. Casted into a string ($Conversion = [string]$Diff),
    5. Stripped of all the unneeded zeros and so on ($Conversion = $Conversion.Replace(":00:00", "")),
    6. Casted again, now to an integer ([int]$TimeToAdd=$Conversion),
    7. And FINALLY used to calculate the correct offset of ten minutes ($AgeMinutes = $TimeToAdd*60+10).
    Which is then used in this example of PS code, in the same script (note that $AgeMinutes is a number of minutes, so it has to be turned into a point in time before comparing):
    $PSAlerts = Get-SCOMAlert -Criteria "Name='Power Shell Script failed to run' AND ResolutionState=0 AND MonitoringObjectPath LIKE 'MS0%.sc.local'" |where {$_.LastModified -le (Get-Date).AddMinutes(-$AgeMinutes)}

    And guess what? It works!
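
    Since I asked for comments anyway, here is one alternative I’d consider myself: the same delta can be derived without any string juggling by using the TimeSpan directly (a sketch with the same end result, assuming whole-hour offsets):

    # Same calculation without string manipulation: take the local-to-UTC offset as a number
    $Now = Get-Date
    $Offset = $Now - $Now.ToUniversalTime()
    $AgeMinutes = [int]$Offset.TotalHours * 60 + 10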

    The script itself
    Well, I am not going to share the PS script I used for this particular customer (the one closing a certain set of Alerts, generated by certain Rules, for a certain set of systems, which are TEN MINUTES or older), simply because I don’t want to share customer specific information.

    Instead I am going to share (or give back to the community) the PS script which uses the same mechanism I already told you about, and which closes ALL Alerts generated by Rules which are 48 hours old or older. So this is the PS script, also available for download from my OneDrive:
    #######################################################################################
    # Script to close old Rule based Alerts older than 48 hours
    # Based on copy & paste of different PS scripts & modified as required by Marnix Wolf
    # Keep the community spirit alive!
    # This script is based on SCOM 2012 R2 PS support & functionality
    #######################################################################################
    #Import SCOM 2012 Module
    Import-Module OperationsManager
    New-SCManagementGroupConnection -ComputerName [NAME OF SCOM 2012x MANAGEMENT SERVER]
    #Add 2 days (48 hours) and reckon with the UTC difference & Daylight Saving Time
    $CurrentDate = Get-Date
    $UTC = $CurrentDate.ToUniversalTime()
    $Diff = $CurrentDate - $UTC
    $Conversion = [string]$Diff
    $Conversion = $Conversion.Replace(":00:00", "")
    [int]$TimeToAdd=$Conversion
    $AgeHours = $TimeToAdd+48
    #Identifies Alerts generated by Rules which are older than 2 days (48 hrs) & closes them, adding a comment.
    $PSAlerts = Get-SCOMAlert -Criteria "ResolutionState <> 255 AND IsMonitorAlert='False'" | where {$_.TimeAdded -le (Get-Date).AddHours(-$AgeHours)}
    If ($PSAlerts -eq $null) {
        Write-Output "No Rule based alerts found which are 48 hours old or older."
    }
    Else {
        foreach ($PSAlert in $PSAlerts) {
            Set-SCOMAlert -Alert $PSAlert -ResolutionState 255 -Comment "Alert Autoclosed by PS script. Alert is Rule based and 48 hours or older."
        }
    }
    #End of script

    In this script all non-closed Alerts are ‘touched’, meaning all Alerts which don’t have 255 as Resolution State (= Closed). When you don’t want that, but want to close ONLY Alerts which are New (Resolution State 0), modify the ResolutionState part of the Criteria in the script above to:
    ResolutionState=0
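
    The Get-SCOMAlert line in the script would then read:
    $PSAlerts = Get-SCOMAlert -Criteria "ResolutionState=0 AND IsMonitorAlert='False'" |where {$_.TimeAdded -le (Get-Date).addhours(-$AgeHours)}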

    #End of posting.

    System Center Technical Preview 2 VHDs For Download

    Yesterday Microsoft published VHDs for the System Center Technical Preview 2 products:

    1. System Center Technical Preview 2 Operations Manager – Evaluation (VHD)
    2. System Center Technical Preview 2 Orchestrator – Evaluation (VHD)
    3. System Center Technical Preview 2 Virtual Machine Manager – Evaluation (VHD)
    4. System Center Technical Preview 2 Data Protection Manager – Evaluation (VHD)
    5. System Center Technical Preview 2 Service Manager – Evaluation (VHD)

    At this moment there is – as far as I know – no TP2 SCCM Evaluation VHD available. But feel free to send a comment if I am wrong.

    SCCM 2012 R2: New Updates Are Out Or Coming Soon

    Yesterday Microsoft released Cumulative Update 5 for SCCM 2012 R2: https://support.microsoft.com/en-us/kb/3054451.

    However, as it turns out, NEXT week Microsoft will also release Service Pack 1 (aka SP2?) for SCCM 2012 R2. Johan Arwidmark tweeted about it:
    [screenshot of the tweet]

    And when it comes down to SCCM, Johan is one of those rare persons who knows a LOT about it. So I totally believe him.

    As goes for any update, I always advise my customers to wait a while before rolling it out, since QC isn’t that good any more. So TEST yourself before you WRECK yourself.

    Wednesday, May 6, 2015

    PowerShell One Liner: Show All Open Alerts Sorted On Highest Count By Name

    Issue
    The SCOM 2012x Console isn’t really fast. And sometimes (or even more than that…) it misses certain functionality. Suppose you’ve got a whole bunch of open Alerts and you want to know which Alerts (based on their Names) are ‘fired’ the most.

    Even better, you would like to have an overview of it, where all Alerts are sorted based on their name with the highest count on top.

    Sadly this kind of basic functionality isn’t present in SCOM 2012x. Luckily it can be done easily in PowerShell with a single one liner (if you ignore the lines to load the OperationsManager module and connect to the Management Group, that is).

    Solution
    Run this on a box which has the SCOM 2012x Console installed with the SCOM 2012x PS Module:

    Import-Module OperationsManager
    New-SCManagementGroupConnection -ComputerName [NAME OF SCOM 2012x MANAGEMENT SERVER]
    Get-SCOMAlert -Criteria "ResolutionState <> '255'" | Sort Name | Group Name | Sort Count -Descending | Out-GridView

    And this is what you get:
    [screenshot]


    Recap
    I know. This is very basic stuff but you would be surprised how many people don’t know this nice one liner.


    Additional Comment
    Out-GridView only works when the PowerShell ISE is installed. Many times the PS ISE isn’t present on servers, but it is on workstations.
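
    On those boxes you can get the same overview without Out-GridView, for instance as a plain table (or pipe it to Export-Csv when you want to keep the result):

    Get-SCOMAlert -Criteria "ResolutionState <> '255'" | Group Name | Sort Count -Descending | Select Count, Name | Format-Table -AutoSize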

    Tuesday, May 5, 2015

    Extreme Makeover: Blog Refresh

    For some time already I was looking for a new look & feel for my blog. And when OMS became GA I figured it was the right time for it. Time for something new, with a fresher and updated look & feel compared to the previous dark lay-out, which has served its purpose for the last 3 years.
    [screenshot]

    So this is my revamped and refreshed blog. I am very happy and excited about it.

    Hopefully you all think the same. Don’t hesitate to let me know what you think about it. Just comment on this posting and I’ll get in touch with you.

    Say Hello To Microsoft Operations Management Suite (OMS)

    Yesterday, at the start of Ignite, Azure Operational Insights (AOI) went Generally Available (GA). In itself not unexpected, since many people already anticipated this to happen on that particular date.
    However, another move might come as a surprise: AOI isn’t a new service which stands on its own, but is part of a whole suite titled Microsoft Operations Management Suite (OMS).
    What OMS is and does? Taken directly from the System Center: Operations Manager Engineering Blog posting: ‘…(OMS) provides unified IT management for any enterprise while taking the hassle out of managing and maintaining infrastructure. It brings together a collection of IT operation solutions, including log analytics, automation, recovery and security, which will help simplify management of your datacenter assets wherever they are – in your own datacenters or in public clouds like AWS or Azure…’

    What does this all mean for me?
    Good question! My guess is you were testing AOI by using the URL https://preview.opinsights.azure.com/. For starters, that URL is gone. You’re redirected to http://www.microsoft.com/OMS instead, where you’ll see a whole new presentation:
    [screenshot]
    As you see, you can try it for free. However, since I already have an account (xyz@outlook.com) I use the Sign in button. The moment I entered my private e-mail address (based on an outlook.com account) I was redirected to another page, since that’s not an official enterprise e-mail account.
    Within a minute I had an overview of ‘all’ the OMS workspaces I manage from that account:
    [screenshot]
    Of course I choose my HomeLab workspace, and now I am asked what Azure subscription I want to link to it. Since I am an MVP, Microsoft has given me a Visual Studio Ultimate with MSDN subscription for free (THANK YOU Microsoft), so I link that with my OMS workspace:
    [screenshot]
    Now I have to provide the correct e-mail address and confirm it:
    [screenshot]
    The moment I hit the CONFIRM & CONTINUE button, a confirmation mail is sent to that address:
    [screenshot]
    And indeed, the confirmation e-mail is received and confirmed by me:
    [screenshot]
    The moment I hit the Confirm Now link in the e-mail, a new webpage is opened and soon I see the revamped AOI interface:
    [screenshot]
    As you can see, there is no trace anymore of AOI. The branding is now Microsoft Operations Management Suite (OMS).
    Say goodbye to the Intelligence Packs and hello to ‘Solutions’
    One thing that always impresses me about Microsoft is the way they push their new services/products. When they’re in that mode, everything is scrutinized and revamped/rebranded in order to fit into the new strategy.
    So the name Intelligence Packs had to go as well. Perhaps (at first sight) it was too techy/geeky?
    [screenshot]
    Let’s take a deeper look at those Solutions. Perhaps that will tell us a bit more about the WHY behind that name change…
    As it turns out some new Solutions are added to the mix as well:
    [screenshot]
    When you’re into Azure, or at least familiar with it, you’ll immediately recognize the yellow highlighted Solutions. These are Azure based services which aren’t ‘new’ but have already been available for some time now.
    And this is the MAIN reason why Microsoft has ditched the name Intelligence Packs: simply because it referred too much to a SCOM-like service, while Microsoft Operations Management Suite (OMS) is WAY more.
    Where SCOM is solely aimed at monitoring, and is thus a powerful yet SINGLE IT operation solution, OMS is a COLLECTION of IT operation solutions! Besides monitoring (or connecting your on-prem SCOM MG to it), it delivers way more.
    Therefore the name Intelligence Packs had to go, replaced by something with a broader scope.
    How about a glimpse into the future?
    As I see it, OMS will become the placeholder where all Azure based services providing one or more IT operation solutions will be grouped and presented as a whole package, where you can activate the solutions you require at that moment.
    Since Microsoft is pushing Azure BIG time, it’s to be expected that OMS as we see it today is small compared to the one available in the near future. New solutions will be added, whereas existing ones will grow in their capabilities.