Monday, November 29, 2010

SCOM R2 Gateway Server not communicating with the SCOM Management Group: EventID 20070 on the GW server and EventID 20000 on the RMS

Normally when a SCOM Gateway is installed and all prereqs are met, things run like clockwork. In the years I have worked with SCOM I have installed many SCOM GWs, all without any real issues whatsoever. And when something was amiss, it turned out to be something simple like a firewall blocking some traffic, an incorrect certificate or a missing certificate chain. With just a few mouse clicks, all was fine and life was good again.

Until last week, that is. I bumped into a GW that wouldn’t work. AT ALL! I could reproduce it as well with another GW, installed in a totally different environment. Strangest thing was that another SCOM R2 GW server was already installed and fully functional. So what was happening? And moreover, how to solve it?

The Situation:
The SCOM R2 GW is installed and everything is in place (certs, SCOM GW Approval Tool has been run, firewalls have been configured and the lot). So there is a connection from the GW to the MG.

However, the GW throws EventID 20070 with the message ‘…Check the event log on the server for the presence of 20000 events, indicating that the agents which are not approved are attempting to connect’:

On the RMS side of things, EventID 20000 is shown, telling that the SCOM R2 GW tries to connect but isn’t recognized as part of this Management Group (A device which is not part of this management group has attempted to access this Health Service. Requesting Device Name : <GW SERVER NAME>…):

Things we tried:
Wow! We did many things in order to get it all up & running:

  1. Of course, we checked the firewalls, routers and switches;
  2. Even installed Network Monitor on the RMS;
  3. Renewed the certs on the GW side of it all, reinstalled the SCOM GW;
  4. Reran the GW Approval Tool many times;
  5. Flushed the Health Service State on the RMS and the MS which the GW should report to in order to get a fresh config file (~:\Program Files\System Center Operations Manager 2007\Health Service State\Connector Configuration Cache\<NAME OF MG>\OpsMgrConnector.Config.xml);
  6. Installed the SCOM GW on a totally new server;
  7. Renamed the SCOM GW to see whether the computer name was causing it all;
  8. Ran some verbose logging on the RMS, MS and GWs which only showed EventID 20000 happening and nothing more;
  9. Deleted the SCOM GW and its SITE entry from the SCOM DB, waited until they were groomed out and started all over totally CLEAN;
  10. Ran some good tracing on the firewalls involved as well, showing us the connection was closed by the RMS (EventID 20000).

All to no avail. Nothing solid came out of it.

So I installed a new SCOM GW in a totally different forest. And experienced the same issue! And all that time, the GW server which was installed some weeks ago was running just fine.

Dive Dive!:
So it was time for a deep, deep dive. We copied the file OpsMgrConnector.Config.xml from the RMS and the MS to another location and started comparing them. Soon we noticed a difference: the file from the RMS contained the Connector information for the fully functional GW server, while the file from the MS didn’t.

That’s strange! That GW server was installed by me using the GW Approval Tool, telling SCOM that the GW server should report to the MS and not the RMS. So this entry should be found in the file located on the MS, not the RMS! I checked my installation document for that particular environment and indeed, I referred to the MS, not the RMS…

Time to run a PS-cmdlet which shows WHICH MS the GW server primarily talks to: Get-GatewayManagementServer | where {$_.Name -like '<GW SERVER NAME>'} | Get-PrimaryManagementServer.

And the output really puzzled me: the functional GW server wasn’t talking to the MS but to the RMS. Also the people running the firewall (TMG) told me that ONLY the RMS was being published, not the MS!

Now it all hit home! Wow!

The Solution:
I stopped the Health Service on the problematic test GW server, removed the GW server from the SCOM R2 Console and reran the GW Approval Tool, this time referring to the RMS as the Management Server. Then I adjusted the registry on the GW server to reflect the RMS instead of the MS, and restarted the Health Service on the GW.
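For reference, the registry adjustment can be sketched as follows. This is a hedged sketch: the key path is the standard HealthService location for the parent health service on a SCOM 2007 R2 agent/GW, and the server name is a placeholder. Verify the path and values in your own environment before changing anything.

```powershell
# Run this on the GW server itself, with the Health Service stopped.
Stop-Service HealthService

# Assumed standard location of the parent health service settings on a
# SCOM 2007 R2 GW; <NAME OF MG> is your Management Group name.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters' +
       '\Management Groups\<NAME OF MG>\Parent Health Services\0'

# Point the GW at the RMS instead of the MS (placeholder FQDN).
Set-ItemProperty -Path $key -Name NetworkName        -Value 'RMS.FQDN.LOCAL'
Set-ItemProperty -Path $key -Name AuthenticationName -Value 'RMS.FQDN.LOCAL'

Start-Service HealthService
```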


All was working now!

Did the same for the problematic production GW server and hit the jackpot there as well!

However, some additional work needs to be done but that will be planned for the days to come:

  1. Publish the MS instead of the RMS on the TMG;
  2. Reconfigure the GWs to talk to the MS and not the RMS (some simple PS-cmdlets will do the trick here);
  3. Adjust the registry entries on the GWs in order to reflect the changes.
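The ‘simple PS-cmdlets’ in step 2 could look something like this sketch, run from the OpsMgr Command Shell on a Management Server. The server names are placeholders; double-check them against your own environment.

```powershell
# Pick up the GW and the MS it should primarily report to.
$gatewayMS = Get-GatewayManagementServer | where { $_.Name -eq 'GW.FQDN.LOCAL' }
$primaryMS = Get-ManagementServer        | where { $_.Name -eq 'MS.FQDN.LOCAL' }

# Repoint the GW so its primary Management Server becomes the MS.
Set-ManagementServer -GatewayServer: $gatewayMS -ManagementServer: $primaryMS

# Verify which MS the GW now primarily talks to.
$gatewayMS | Get-PrimaryManagementServer
```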

Why? It is not good to have servers reporting to the RMS: the RMS has enough work on its plate as it is, so agents and gateways are better off reporting to a dedicated MS.

Yes, I am still puzzled. WHY did the first functional GW server talk to the RMS instead of the MS, while I ran the GW Approval Tool in such a manner that it should talk to the MS? Got the screen dumps showing it. I really felt stupid and taken by surprise. But I also learned a valuable lesson: how to troubleshoot SCOM R2…

While troubleshooting this issue many colleagues (Peer, Tim, Wim, Pieter-Jan and Maarten) tuned in. Also got some serious aid from the SCOM MVPs like Pete, Graham, Alexandre, Paul and Simon. Even KH assisted! A good experience it was!

Without their help, effort and time I would not have cracked it! Thank you guys! Much appreciated!

Friday, November 26, 2010

Updated SCOM R2 Core MP has been released

A few days ago the updated Core MP for SCOM R2 (version 6.1.7695.0) was released by Microsoft.

Some enhancements have been made, among them (I won’t list them all here; for detailed information check out the screen dump taken from the MP Guide):

  • A new Report which lists all Agents, Management Servers (RMS and MS) and Gateways, grouped by their current Health State;
  • A new rule which checks the validity of the Alert subscriptions;
  • WMI Monitors to be run on the systems where the Agents are installed;
  • Updated Product Knowledge.

This MP is getting better and better every time, and has become THE showcase of what an MP is all about. A job well done, Microsoft!

The changes in this update are (taken from the MP guide):

The MP can be downloaded from here.

Thursday, November 25, 2010

SCOM vNext – Part III – Network Monitoring

Postings in the same series:
Part I – The Next Generation of SCOM
Part II – Holistic View of Application Health
Part IV – Topology Simplification, Pooling and Timeline

In the third posting of this series I will describe another new feature in OpsMgr vNext, Network Monitoring.

Until now, out-of-the-box network monitoring with SCOM (SP1 or R2) is basic. When one requires better and deeper network monitoring, additional (third-party) MPs are needed. Some of them are commercial (Jalasoft or OpsLogix), another is open source (xSNMP Suite). On top of it all, the SCOM component used for network monitoring isn’t very robust or scalable either.

So Microsoft has rewritten this component completely for OpsMgr vNext. And they have done a good job! Let’s take a deeper look.

This isn’t just a slide with some marketing slogans. The new Network Monitoring module of OpsMgr vNext is based on these three pillars. And it really rocks!

Microsoft has made huge investments in order to help the infrastructure owners and application owners by providing enough information about the network so they know whether the issue they are experiencing is network related or not.

So this means the monitored servers will show their dependencies on the network devices as well? Good question! And the answer is YES! Aka Server To Switch Fabric. 360 app view!!!! Take a look here:
(Screen dump taken from video, so the quality isn’t that good.)

And here:
(Screen dump taken from video, so the quality isn’t that good.)

Isn’t that sweet? Again the 360 View is present here. A network-device-centric View can be used, or a server-centric View. Both will show the dependencies!

A cool demo was given during the session. With the unreleased beta (!) version of OpsMgr vNext the network of Tech-Ed Berlin 2010 was being monitored, based on the read-only SNMP community string. Within 10 to 15 minutes the devices were discovered. (A primary C-device was used as the seed: OpsMgr interrogated it and crawled its vicinity view in order to pick up all the network devices connected to it.)

A big change compared to today, where the community string is a property of the network device object. OpsMgr vNext will store the community string as a Run As Account. One may use multiple community strings in order to discover the network devices; per network device the available community strings will be tried, and the one that works will be stored for that device.

The Discovery Wizard for the network devices allows multiple filters and schedules as well, so it is a very flexible aid. Some screen dumps:

(Screen dump taken from video, so the quality isn’t that good.)

C Devices:
(Screen dump taken from video, so the quality isn’t that good.)

Besides that, the look & feel is the same as for computers monitored by OpsMgr as we know it today. So for the network devices, Health Explorer, Alert View, State View and the lot will be found in OpsMgr vNext:

And on top of it all, some good Dashboards are there as well:
(Screen dump taken from video, so the quality isn’t that good.)

(Screen dump taken from video, so the quality isn’t that good.)

(Screen dump taken from video, so the quality isn’t that good.)

Also Summary Views are available:
(Screen dump taken from video, so the quality isn’t that good.)

The same View as shown above, but now in the Web Console (based on Silverlight):
(Screen dump taken from video, so the quality isn’t that good.)

So this means it can be used in SharePoint as well.

This is one of the new features of OpsMgr vNext that’s really awesome! Much has been said about how SCOM monitors network devices today, and Microsoft has listened, as demonstrated at Tech-Ed EMEA 2010. This new way of network monitoring, in conjunction with the dependencies and the cool dashboards, will make OpsMgr vNext ready for the next era of monitoring. Can’t wait until OpsMgr vNext goes RTM!

The next and last posting in this series will be about the timeline and a total wrap-up of the whole session presented at Tech-Ed.

Monday, November 22, 2010

Opalis 6.3 TechNet Library is now live!

The Opalis 6.3 TechNet Library has been available for some days now.

It contains really good information, so for everyone running Opalis this site is really the place to go.

Saturday, November 20, 2010

Opalis 6.3 announced RTM!

This information just got out: Opalis 6.3 has been released to manufacturing (RTM)!

Included in Opalis 6.3 are:

  • New Integration Packs for Configuration Manager (SCCM), Data Protection Manager (DPM), Service Manager (SCSM) and Virtual Machine Manager (SCVMM);
  • An updated Integration Pack for Operations Manager (SCOM) to support the Server 2008 platform;
  • Support for the Opalis infrastructure to run on the Server 2008 platforms.

Opalis 6.3 can be downloaded from here:

Adam Hall, the senior Technical Product Manager of Opalis, recorded some release interviews with the engineering team:

Gp De Ciantis and the evolution of the Virtual Machine Manager Integration Pack

Jim Pitts and his role in the Service Manager and Data Protection Manager Integration Packs

Robert Hearn and the importance and focus of the Configuration Manager Integration Pack

Rich Halbert and Rehan Jaddi talk about the Opalis 6.3 release

Friday, November 19, 2010

Opalis Architecture and Workflow Deployment Docs

Charles Joy published two documents:
  • an EXAMPLE Opalis Architecture and
  • an EXAMPLE Workflow Deployment Process.

These two documents are UNOFFICIAL but very interesting nonetheless.

Documents to be found here.

Thursday, November 18, 2010

SCOM vNext – Part II – Holistic View of Application Health

Postings in the same series:
Part I – The Next Generation of SCOM
Part III – Network Monitoring
Part IV – Topology Simplification, Pooling and Timeline

In the second posting of this series I will describe one of the exciting new features in OpsMgr vNext, Holistic View of Application Health, aka 360 Degree View. Besides that I will take a deeper dive into some categories of questions coming from the customer base which drive Microsoft to build OpsMgr vNext.

As stated before, Microsoft cares about its customers AND listens.

When questions, remarks/comments about SCOM come in (via the Connect site for instance) these are categorized. Some of these categories are (with additional explanation):

  1. Simplicity
    Making it easier to use SCOM. Delegation for assigning the right roles to the right persons (deployment of Agents, making overrides, group creation etcetera) so a single SCOM Admin doesn’t become the bottleneck in administering a SCOM environment.

  2. Reliability
    The software (making up the SCOM environment) must be working and functional. So OpsMgr vNext will become even more robust.

  3. Personalization
    Different users will only see the information they need. Nothing more, nothing less. Like the CIO:

    Or the Service Owner:

    Or the Operator:

  4. Consistency
    All different data – whether coming from hardware MPs, application MPs or network devices – will show the same Views across all interfaces, whether it’s the UI, Web Console or SharePoint. Of course these Views will take the person using them into account. So the CIO will see different things compared to an Operator.

When all these categories are put together (along with many others), what is left is the ULTIMATE question/requirement from the customers running SCOM:

This will lower the TCO considerably. On top of that, these items will also be reduced in order to bring down the TCO:

  • Time until you get value out of the product (from the initial deployment of the infrastructure until you find and fix the issues in your IT environment);
  • Maintenance of SCOM.

And these are not just words or marketing slang. The pre-beta demonstrated at Tech-Ed covers many of those categories even though it is still work in progress! Of course, many demonstrated features are under development and therefore subject to change before final release. But it ROCKS nonetheless! Wow!

During the earlier mentioned session these three areas were covered:


Area 01: Holistic View of Application Health aka 360 Application Monitoring
Say what? Sounds a bit vague, not like real IT? Time for some explanation:

How about easily determining what is wrong with a monitored application, in such a manner that the end-user experience is taken into account as well? Not only the inside of the application is monitored (servers, hardware, OS, application and related services) but also the outside, or how the end-user perceives it (like external transactions).

This means that an application is not only monitored from bottom to top (inside infrastructure up to the application itself) like hardware, network, server OS, infra services (like DHCP, DNS and AD) and related services (like IIS and SQL), but also from the end-user perspective: Can they log on to a commerce website, add items to their cart and check out? It is all about simulating the end-user experience.
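To give an idea of what simulating the end-user experience boils down to, here is a minimal sketch of my own (NOT how OpsMgr vNext implements it): a probe that walks through the steps of a web shop transaction and reports whether, and how fast, each step responds. The URLs and step names are made-up placeholders.

```powershell
# Hypothetical outside-in probe: time each step of a shop transaction.
$steps = @(
    @{ Name = 'Logon';     Url = 'http://shop.example.com/logon' },
    @{ Name = 'AddToCart'; Url = 'http://shop.example.com/cart/add?item=42' },
    @{ Name = 'CheckOut';  Url = 'http://shop.example.com/checkout' }
)

$web = New-Object System.Net.WebClient
foreach ($step in $steps) {
    $watch = [System.Diagnostics.Stopwatch]::StartNew()
    try {
        [void]$web.DownloadString($step.Url)
        '{0}: OK in {1} ms' -f $step.Name, $watch.ElapsedMilliseconds
    } catch {
        '{0}: FAILED ({1})' -f $step.Name, $_.Exception.Message
    }
}
```

A real end-user probe would of course keep session state between the steps; this only illustrates the idea of measuring the outside of an application instead of just its insides.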

The recent acquisition of AVIcode will certainly aid here (click here for the video recording and slide deck of that session), telling us where the problems are, down to a line of code in an application.

Some of the questions which will be answered with this kind of monitoring:

  • What is wrong with the application?
  • What must I do to troubleshoot it?
  • Who do I have to contact? (Like: The SQL department? The IIS department? The developers?)

OpsMgr vNext will pinpoint the issue exactly, so troubleshooting will be fast and efficient. Enablers for this kind of monitoring:

  • Simple dashboards enabling 360 Views (inside and outside);
  • Reliable information for pinpointing issues;
  • Personalized Views showing relevant information, driven by persona;
  • Consistent Views across multiple interfaces (SharePoint, Console, Web console).

Visualization of 360 Application Monitoring
Since one picture explains more than a thousand words, take a look at this picture with the different blocks of 360 Application Monitoring:


When all these blocks are brought together you have a Holistic View of Application Health:


Of course, it was also demonstrated during the same session. Live, that is! Want to see it? Click here for the video recording of that session and watch this time frame: 16:52 > 20:00.

Next posting in this series will be about Network Monitoring, another great feature of OpsMgr vNext.

Wednesday, November 17, 2010

All SCOM R2 Web Console configuration settings

17-11-2010 Update: Pete Zerger commented on this posting. He referred to an article posted by Michel Kamp on System Center Central, showing many more settings, several of which are totally valid yet undocumented.

I bumped into this blog posting on the Microsoft TechNet OpsMgr Forums: all configurable settings for the SCOM R2 Web Console, posted by Michael Pearson.

Want to know more? Go here.

Tuesday, November 16, 2010

Updated Version of the MOMClean Resource Kit Tool For OpsMgr R2 has been released

Some hours ago Microsoft released an updated version of the MOMClean Resource Kit Tool specially for SCOM R2. The old version does not work properly in conjunction with SCOM R2.

Want to know more? Go here.

SCOM vNext – Part I – The Next Generation of SCOM

Postings in the same series:
Part II – Holistic View of Application Health
Part III – Network Monitoring
Part IV – Topology Simplification, Pooling and Timeline

Wow! This is something special. The Next Generation of SCOM will be hugely different compared to the current version, aka SCOM R2. In a series of blog postings I will write about it. The first posting in this series will be all about how a new version of SCOM comes to be and what drives Microsoft to develop such a new version. But before I continue I want you to know this:

Being a SCOM MVP is a real privilege, something which I do not take lightly. I have been told many things, all strictly under NDA (Non Disclosure Agreement). And I will not violate that in any kind of way. So this series of postings will not reveal anything new which hasn’t been shared by Microsoft with the public during events like Tech-Ed Berlin 2010. When I am in doubt I will first double check my posting with Microsoft before publishing it.

Three more things you should know (and enjoy as well!) are these:

First of all, the content of this posting is based on a session presented at Tech-Ed Berlin 2010: ‘MGT205: Introducing the Next Generation of System Center Operations Manager’.

Secondly, as a non-Tech-Ed attendee you can view all video recordings of the Breakout Sessions presented at Tech-Ed Berlin 2010. And not just that! The slide decks are available as well. ALL FOR FREE! Buy yourself some German beers, have a Currywurst, start a recorded session and experience Tech-Ed Berlin 2010 as close as you can get!

For a general overview of all available sessions, go here. For the video recording of this particular session, go here.

Thirdly, know this: the same ‘subject to change’ disclaimer applies to this series of postings. Which is totally understandable, since OpsMgr vNext is work in progress. Depending on the input Microsoft gets from its customer base (among other things), some features will be added and others will be skipped.

OK, that’s out of my system. Let’s get back to the topic at hand. First of all, what name shall I use for the successor of SCOM R2? During Tech-Ed I heard Microsoft using different names for it, like:

  1. OpsMgr vNext;
  2. OpsMgr 10;
  3. OpsMgr 2012.

Since I personally think the name OpsMgr vNext has a nice ring to it, I will use this name when I refer to the next version of SCOM R2.

OK, having said that, let’s start!

First of all, what drives Microsoft to build a new version of SCOM R2? It is not that they want to keep their employees busy, so let’s start a new build. Key for a new version is input from the field: the organizations and end-users using SCOM R2. Much input is delivered from the SCOM Community as well. Based on how many times a certain topic is mentioned, raised or rated, it will end up higher on the list of items to be added or changed in the next version of the product.

Don’t think like ‘Duh! I live in Europe/Australia/Russia/Japan/China, while SCOM is being developed, built and tested by Microsoft in Redmond, US. So how do I ever get through?’. Perhaps in the old days this was viable, but today it isn’t. For many products Microsoft has started a special website, the Connect website.

Any kind of feedback or bug report filed there will be sent to the teams of developers responsible for the product. And for SCOM there is just such a website:

You only need a Windows Live ID (and who hasn’t got one? When you use Windows Live Messenger you have such an ID). Go here, log on and let Microsoft know what you think about the product. You can also vote on feedback which has already been submitted by others. And believe me when I tell you that Microsoft really cares and takes feedback submitted on this website SERIOUSLY.

On top of it all, as time moves forward, so does the IT industry, so new features are required by that same industry. A good and shiny example here is the capability for monitoring Windows Azure, added with the deployment of CU#3 for SCOM R2. Of course an additional MP is required, but without CU#3 it won’t work.

So SCOM vNext will add many new features as well, covering requirements which did not exist when SCOM 2007 went RTM.

What it all comes down to is that Microsoft is putting much time, effort and energy into creating a new version of SCOM R2 which closes the gap between what the product delivers and the requirements of the IT industry. Of course it isn’t possible to create a product which does it ALL. But as far as time, budgets (which are huge, believe me) and resources allow, OpsMgr vNext will be ready for the future!

Next posting in this series will be about some new features of OpsMgr vNext, in conjunction with some major improvements. So stay tuned!

Monday, November 15, 2010

New KB article: "The remote server returned an error: (403) Forbidden" when running MCF.exe to test the connector framework in SCOM

A few days ago a new KB article was released which describes the issue where an error is thrown when executing MCF.exe to test the System Center Operations Manager Connector Framework (OMCF) configuration.

Part of the Error message:
Unhandled Exception: System.ServiceModel.Security.MessageSecurityException: The HTTP request was forbidden with client authentication scheme 'Anonymous'. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
   at System.Net.HttpWebRequest.GetResponse()
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   --- End of inner exception stack trace ---

KB2461666 describes this issue and how to resolve it.

Sunday, November 14, 2010

New MP: Lync Server 2010 Monitoring Management Pack

A few days ago a new MP was released by Microsoft. This MP monitors Lync Server 2010, the successor of OCS 2007 (R2).

Taken directly from the website:

This MP looks like a huge improvement compared to the OCS 2007 R2 MP. I haven’t tested it yet, but when I do I will post about it.

Want to know more? Go here.

Friday, November 12, 2010

Tech-Ed Berlin 2010 – The Last Day – Day Five

Took the Herring Express to the Berliner Messe. I use the phrase Herring Express because the U-Bahn was loaded with Tech-Ed attendees PLUS all of their luggage. It looked a bit like the Japanese subway:

So I was happy to arrive at Messe Süd and drop off my luggage :). Even though there was only half a day to spend at Tech-Ed, there were three interesting sessions I attended:

Session 1:
Everything about Security and Compliance on BPOS and Office 365:

Moving to the Cloud isn’t only about technology. Other issues must be addressed properly as well in order to make it a success. Since regulations can be very complex, Microsoft seeks standardization in order to reduce the complexity. To achieve that, Microsoft has launched the Online Risk Management Program, which breaks everything down into these three silos:

Every silo was explained in more detail. A very interesting session it was.

Session 2:
This one was about going out and doing some shopping :). I bought some gifts for my family back home. While I was waiting for the S-Bahn to bring me to Charlottenburg (a huge shopping area in Berlin) I noticed this sign on the platform:

I bought what I needed and was back in time for lunch:

On my way back I saw the Germans preparing themselves for the closing of the conference:

What struck me most during my stay in Berlin is that the German people are really polite. No matter how crowded the trains are, they make way when people are getting out. When I did some shopping, the people staffing the stores were really helpful and friendly. Thank you Berlin!

Session 3:
Hmm. After all the sessions of the last days I got the ‘session blues’ and skipped the LAST session of Tech-Ed 2010. Instead I visited the Expo and met some fellow SCOM MVPs there. I talked with some sales representatives from different companies and that was the end of Tech-Ed 2010.

Total Recap:
The Cloud has landed! That’s all I can say. I won’t repeat myself here; read my previous postings about Tech-Ed 2010. It is evident that Microsoft puts a lot of effort into it. The future will tell to what degree today’s Microsoft customer base will adopt it.

It is good to see that the SC Suite is growing and more and more becoming the enabler for automated datacenters and the move towards the Cloud. SCOM vNext, SCCM vNext, SCVMM vNext and SCSM vNext are all planned to go RTM next year. That’s what I call an ambitious agenda! And believe me when I tell you that the change won’t be just the name of the products. No way! The real changes will be found under the hood, all targeted at delivering what the customers ask for and the future requires. Just wait and see.

It was good to have met some old friends and to make new ones as well. Good to have met many of the fellow MVPs and others like Justin Incarnato, Ryan O’Hara, Jason Buffington, David Mills, Peirong Liu, Rehan Jaddi, Vladimir Joanovic, Alex Fedotyev, Prabu Ambravaneswaran, Kevin Beares, Eric Battalio, William Jansen and Matthew White (to name just a few).

A week well spent. A good Tech-Ed it was.

Nothing to complain? Hmm. Not really no. Just some side-notes.

I would like to see the goodies bag return next year. The pouch they gave us this year was a bit amusing. However, if I could choose between a Monday at Tech-Ed with some good sessions or a goodies bag ‘Old-Tech-Ed Style’, I would opt for the first one!

But I must say, ALL sessions I attended were of a high level. The speakers knew what they were talking about and gave some good demonstrations as well.

Thursday, November 11, 2010

Tech-Ed Berlin 2010 – Day Four

The last whole day of Tech-Ed. Tomorrow (Friday) there are only sessions planned for the morning. At 13:30 hours the conference will be closed. I started today later than usual, so it was a bit busy at Messe Süd:

Today started for me with staffing the OpsMgr booth on the TLC floor from 9:45 AM until 1:00 PM. David Allen joined me to cover the DPM side of things. Even though many people visited the OpsMgr/DPM booth, I wasn’t that busy. Most questions were about DPM. So while David was very busy (thank you David) I had time to look around and have some good talks with other attendees.

Session 01:
The first session I attended was organized by Alexandre Verkinderen, a much respected SCOM MVP. He was in charge of a BOF (Birds of a Feather) session, all about System Center, titled: ‘Shoot your System Center questions!’. The people attending could ask any kind of SC-related questions and get them answered. Alexandre was assisted by Maarten Goet and Simon Skinner, both SCOM MVPs as well. So enough experience and knowledge was present to cover just as many areas of the SC Suite.

Even though not too many attendees were present, it was a good session since many questions were asked and, even better, got answered. So it was time well spent and good to see how Alexandre ran the session. Respect!

I skipped a session or two in order to visit some booths at the Expo. I talked with some companies who build MPs for SCOM and/or connect with SCOM in order to create additional value. It is good to know about the additional possibilities so I can inform my customers about them. Also good to see that many companies deem SCOM important enough to build additional MPs and software. This will most certainly help keep the product in good shape and alive as well.

Session 02:
This session was all about the X-Plat (Cross Platform) extensions and third-party monitoring options that integrate with System Center. The speaker of this session was Peirong Liu, the Sr. Program Manager of SC X-Plat & Interop.

Many demonstrations were given. At first I was a bit skeptical, since the demonstrations weren’t done live but consisted of a set of screen dumps. However, the way she explained them and highlighted the most important things per screen dump made it all very clear and ruled out the Demo Devil, enabling her to run multiple good and spot-on demonstrations.

I was impressed and learned a few things as well, also about how to approach demonstrations. Sometimes they can be done with a series of screen dumps, as showcased during this presentation.

Session 03:
All about AVIcode, one of Microsoft’s recent acquisitions. Impressive it was. It is a showcase of what a good MP can do for a SCOM environment. I haven’t often seen MPs bring the total SCOM environment to a whole new level of monitoring, but AVIcode is one of them.

The AVIcode MP is meant for monitoring applications based on the .NET Framework. The strength of this MP is based on two facts: the applications which need to be monitored by this MP don’t need to have their code changed, AND the load this MP creates on a monitored application is between 3 and 5%!

Another very good thing about this MP is the data it collects when something goes wrong. It contains everything the developers require in order to debug the application. The Alert Context tab of the Alert shown in SCOM plays an important role here: AVIcode has extended that tab to a huge extent. Everything required for proper debugging is to be found there:

This session went way too fast. I really enjoyed it.

Another great day at Tech-Ed, old-fashioned style: all about the SC Suite. A day without the Cloud. Met some good friends and learned a lot today. Nice!

Wednesday, November 10, 2010

Tech-Ed Berlin 2010 – Day Three

Man! Next time this event takes place in Berlin I am going to use roller skates! The venue (Berliner Messe) is really huge and one has to cover many miles per day when attending all scheduled sessions. Besides that, it can be a maze as well, so sometimes it feels like running around in circles when trying to locate the next session. The locations are divided among many satellite buildings and on different floors, so sometimes it is a challenge to find them. A good recipe is just to follow the rest of the herd and hope they are about to attend the same session as I have planned :).
Picture taken from a satellite building on the first level.

But still it is worth every single minute. Wouldn’t want to miss this at all.

Session 01:
The first session I attended was all about the Hyper-V Cloud related to the Fast Track. Fast Track means that organizations are enabled by Microsoft (and its partners like HP, Dell and IBM) to deploy a Private Cloud, based on Hyper-V technology, rather fast and at low cost by using a Reference Architecture.

But what is the difference between Public and Private Clouds? This picture tells it all:

It was very interesting to hear that 80% of today’s deployed servers are physical (P). On the other hand, the number of physical servers is staying flat while the number of virtualized (V) servers is growing by 40% annually. So very soon physical servers will be outnumbered by their virtualized counterparts.

The next step in virtualization is the Cloud, where the Cloud can be looked upon as an automated environment in which end-users (organizations) can provision the number and type of VMs themselves by using the Self-Service Portal. As a matter of fact, Microsoft just released Self-Service Portal 2.0, to be found here. What a coincidence! :)

With the Hyper-V Cloud Fast Track, Microsoft targets three main fields of interest:

  1. Deployment speed;
  2. Reducing risk;
  3. Flexibility and the freedom of choice.


The rest of the session covered the three Architectural Principles and drilled deeper into each of them:

Also some videos were shown from partners like IBM (about the Compute architecture component), Dell (about the Network architecture component) and HP (about the Storage architecture component). It was interesting to see how these partners approached it from their own angle and yet integrated totally with the big picture.

The best news (from the SC perspective) is that the SC Suite is tightly integrated into this whole picture. To make it clearer I have added the related SC products in this screen dump:

So SC is not going to disappear when the Cloud becomes standard. No way. It is there from the beginning and will (at least this is what I think will happen) grow more and more into the whole Online Service as a much-required component.

Last but not least, the three different types of Hyper-V Cloud offerings were shown. But of course, this can change over time since Microsoft is getting up steam and changing its Cloud Service offerings in order to serve the market as well as it can:

The session was really great and the speakers knew their stuff all right. So it was time well spent.

Before I went to the second session I had an appointment with Ryan O’Hara, Senior Director Product Management 'Management & Security' at Microsoft. We talked about many things, like the Cloud and how the SC Suite relates to it. A good, interesting and inspiring conversation it was. Good to meet some of the people who are driving the SC Suite.

Session 02:
This session was interactive and all about Windows Intune. Also very interesting to see how Windows 7 based clients can be monitored from the Cloud.

Session 03:
All about the Dynamic Datacenter. The best thing was that all the ‘big’ guys from Microsoft were present (Adam Hall, Jason Buffington, Brjann Brekkan, Jeff Wettlaufer, Kenon Owens and Sean Christensen), thus representing all products of the SC Suite.

Since this was an interactive session, the audience could ask questions and get them answered. Had some good laughs as well, since the speakers made some good fun while seriously answering the questions from the public.

Session 04:
All about SCOM R2 and what’s new, presented by Justin Incarnato, the SCOM MVP Lead at Microsoft. So I had to be there! Other SCOM MVPs attended the session as well. Enjoyed it big time. Very good it was. He is a good speaker and knew how to capture the audience’s attention.

Again, the Cloud is COMING and happening. Nothing to get scared of, but the effort, energy, resources, time and money Microsoft puts into it are massive. And it will affect every piece of software that Microsoft produces, sooner or later. Already today some of the betas of the SC Suite are being affected and delivering good tooling in order to manage, monitor, provision and support the Cloud. Next year, in H2, when these betas go RTM, they will certainly be Cloud enabled, or even Cloud enablers.

And as time passes by, this will only expand, integrate and develop further, to such an extent that the SC Suite will become a set of components residing at the very base of the Cloud. At least that is the way I see it, or better, think it will happen.

Alongside, there will come a time where monitoring of company-owned IT environments, like on-premise datacenters, can be run from the Cloud as well. So in the years to come, monitoring will become SaaS (Software as a Service).

IT can be looked upon as an ever changing landscape. Sometimes many years pass by without any major landslides taking place. The Cloud can be compared with a series of earthquakes which will change that landscape in a huge manner. The way we ‘do’ IT today will change significantly, where mobility, functionality and availability will be key. IT as a service, just like electricity today.

But that also took years to grow to the level we know today. Likewise with the Cloud. Office 365 looks great and holds some good promises for the future, but lacks some much-required features which the on-premise versions do have, like voice communication. And yet Office 365 is already powerful in its current shape and form. Combined with the dedicated efforts of Microsoft, Office 365, and other Cloud based services, will grow up quickly until it is hard to differentiate between the on-premise and Cloud versions.

Of course this will not happen in one giant leap but in steps, some small and others big. First, companies will move toward a mixed model where some parts will be Cloud based and others on-premise, like one or more privately owned or rented datacenter(s). As years go by and the Cloud evolves more and more, there will come a day which resembles today’s world with P and V, where V is getting the upper hand. Instead of the P/V mix it will be the on-premise vs. Cloud mix. Of course, the storage of sensitive data, like R&D results, will be kept in on-premise locations.

It is evident that the Cloud is on top of Microsoft's agenda. Tech-Ed showcases this since it is all about the Cloud, and I cannot imagine a single session here at Tech-Ed which does NOT mention the Cloud in some kind of way. I am glad to be able to attend this Tech-Ed since it shows that much is going on and a huge shift is taking place.

The outcome is yet unknown, which makes it even more exciting.