Monday, June 19, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 03 – SCOrch

Advice to the reader
This posting is part of a series of articles. To get the full picture, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM

In the third posting of this series I’ll write about how System Center Orchestrator (SCOrch) relates to Microsoft’s Mobile First – Cloud First strategy. And as stated at the end of the second posting of this series, SCOrch isn’t in good shape at all…

Sure, there is SCOrch 2016. And YES, it has its Mainstream Support End Date set to the 11th of January 2022, just like the whole SC 2016 stack. Also, the Integration Packs for SCOrch 2016 are available for download. So on the outside all seems to be just fine: SCOrch is alive and kicking!

But wait! Hold your horses! Because here the earlier mentioned iceberg comes into play. Time to take a look at what’s UNDER the water line, outside the regular view…

Yikes! x86 (32-bit) ONLY…
The days when 64-bit workloads were special are long gone. All important Microsoft products and services are 64-bit based. Meaning, x86 (32-bit) isn’t the default anymore. Nonetheless, SCOrch 2016 is still x86 based and there aren’t any plans at Microsoft to rewrite the code for x64.

As a result, SCOrch’s native PowerShell (PS) execution runs in a 32-bit PowerShell 2.0 session, causing all kinds of issues. Sure, there are workarounds for it, but they are still workarounds.
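The core of those workarounds is first detecting the WOW64 situation: on 64-bit Windows, a 32-bit process sees the `PROCESSOR_ARCHITEW6432` environment variable, and the usual fix is to relaunch the payload through the `Sysnative` alias of the 64-bit PowerShell. A minimal sketch of that check, shown here in Python against a plain environment dictionary purely as an illustration (the same test works in any scripting language):

```python
def running_32bit_on_64bit(env):
    """True when the environment says: 32-bit process on 64-bit Windows.

    Windows sets PROCESSOR_ARCHITEW6432 only for WOW64 (32-bit on x64)
    processes; native 32-bit and native 64-bit processes don't see it.
    """
    return env.get("PROCESSOR_ARCHITEW6432", "").upper() == "AMD64"

# What a SCOrch-launched script would see on a 64-bit host:
wow64_env = {"PROCESSOR_ARCHITECTURE": "x86", "PROCESSOR_ARCHITEW6432": "AMD64"}
# What a native 64-bit PowerShell session would see:
native_env = {"PROCESSOR_ARCHITECTURE": "AMD64"}

print(running_32bit_on_64bit(wow64_env))   # True  -> relaunch via Sysnative
print(running_32bit_on_64bit(native_env))  # False -> safe to continue
```

When the check comes back true, the typical move is to re-invoke the script via `C:\Windows\Sysnative\WindowsPowerShell\v1.0\powershell.exe`, which WOW64 maps to the real 64-bit binary.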

Even though SCOrch packs some serious power, the x86 limitation is something to reckon with.

The ‘Engine’ & the ‘Graphical Editor’
These are crucial parts of any automation tool, SCOrch included:

  1. The ‘Engine’ enables the automation tooling to ‘translate’ the defined activities as stated in the runbook (e.g. running a script, stopping a service, creating a folder, etc.). SCOrch runs its own runbook engine, using its own proprietary runbook format.

  2. The ‘Graphical Editor’ allows for a ‘drag & drop’ experience when creating new runbooks/workflows (e.g. when the print spooler service stops, restart it; wait 2 minutes and check the state of the spooler service; when started, close the related Alert; when still not running, create a ticket and escalate it to the right tiers).

    SCOrch brought this ‘drag & drop’ experience to a whole new level because it doesn’t require any scripting. Just drag & drop the required activities – from the loaded Integration Packs – to your ‘canvas’, connect them as required, apply filters/criteria and so on, and be ‘done’ with it. Of course, good runbook authoring is far more complicated; all I am trying to do here is share the basics of how it’s done. The gist of this is that even without any scripting skills, one can build advanced runbooks with SCOrch.

However, things have moved on. In today’s world the on-premise/data center based workloads are often connected to the cloud, whether we’re talking Azure IaaS/PaaS/SaaS or Office 365 for instance. And whenever you automate the management of cloud based workloads, PS is a hard requirement, whether you like it or not.

The challenges
And here SCOrch has two serious issues/flaws:

  1. By default SCOrch PS execution runs in a 32-bit PowerShell session, missing out on many advanced PS features available in the x64 editions;
  2. By default the SCOrch engine isn’t PS based.

As such, there will always be a translation from the native SCOrch engine to PS. On top of that, there will ALSO be a translation from x86 to x64 and vice versa…

And as it goes with every translation, there will be a performance penalty. Even worse, the whole chain (SCOrch > Azure Automation (AA)/Service Management Automation (SMA) > the targets to hit with a runbook/workflow) becomes longer and therefore more vulnerable to (human) errors. So why not cut out the ‘middle man’ – in this particular case SCOrch – and start directly with PS? Because SMA and AA both use an identical runbook format based on Windows PowerShell Workflow, which is x64 based.

No more translation, neither from a proprietary runbook format, nor from x86 PS execution to x64. Nice!

Port SCOrch to x64 and native PS?
For sure. Microsoft could solve it all by rewriting SCOrch in such a way that it would run natively on x64 and use the same runbook format based on Windows PowerShell Workflow. However, Microsoft isn’t going to do that.

Already in 2014(!) Microsoft was pretty clear about the ‘future’ of SCOrch. In 2015 Microsoft published the SCOrch Migration Toolkit (still in beta?!). Around the same time Microsoft also released the SCOrch Integration Modules – converted SCOrch Integration Packs, ready for import into AA. In 2016 Microsoft published a blog posting about how to use the previously mentioned tools and modules.

And that’s about all the effort Microsoft has aimed at SCOrch specifically… Instead Microsoft tries to push you to AA or, in some cases when using the Windows Azure Pack (WAP), to SMA. For most people however, AA is the future (at least Microsoft hopes).

Verdict for SCOrch and its future
Yes, SCOrch 2016 is available. And it still packs a lot of power. BUT at the end of the day, SCOrch 2016 is dead in the water. Not much effort, budget or resources are allocated to it. Only the bare minimum. Sure, it has gotten the 2016 branding AND the related Integration Packs (IPs) are updated to support the 2016 Windows Server workloads. But that’s it.

Nothing new is coming out of that door. It’s the end of the line for SCOrch 2016 after the 11th of January 2022. Even the recent posting from Microsoft about the new delivery model for the System Center stack is pretty clear about SCOrch: not a single word about it. Which is a statement in itself.

What to do?
When you’re not using SCOrch yet, but are using other components of the System Center 2016 stack: think twice. Sure, you already have the licenses for it. But please keep in mind that every effort and investment in SCOrch must be made twice: once to get it into SCOrch, and a second time to get it out to other automation tooling, no matter which you choose.

When you’re using SCOrch already, it’s time to look for alternatives. Also look OUTSIDE the Microsoft boundaries, please. POC the alternatives and look at the possibilities to export the SCOrch based runbooks to your alternative of choice. Also test the connectivity with cloud and on-premise/datacenter based workloads. And TEST and EXPERIENCE how the graphical editors function, how easy they are to operate and, last but not least, how easy it is to catch errors and act upon them. AA still has some challenges to address here, like ease of operation and error capturing…

Coming up next
In the fourth posting of this series I’ll write about SCDPM (System Center Data Protection Manager). See you all next time.

Friday, June 16, 2017

!!!Hot News!!! Frequent, Continuous Releases Coming For System Center!!!

Wow! For some time now, Microsoft has been telling their clients that one day the SCCM release cycle, also known as Current Branch (CB), would come (in one form or another) to the rest of the System Center stack.

And FINALLY Microsoft has released more information about how the System Center stack is going to adapt to a faster release cadence.

In a nutshell, this is going to happen:

  1. Microsoft will be delivering features and enhancements on a faster cadence in the next year;
  2. Main focus here will be on the highest priority needs of Microsoft’s customers across System Center components;
  3. There will be releases TWICE per year, in alignment with the Windows Server semi-annual channel;
  4. A technical preview release is planned in the fall with the first production version available early next calendar year;
  5. There will be subsequent releases approximately every six months;
  6. These releases will be available to System Center customers with active Software Assurance;
  7. SCCM/ConfigMgr will continue to offer three releases per year.

In the first release wave the main focus will be on three SC components:

  1. SCOM(!);
  2. SCDPM;
  3. SCVMM.

Key areas of investment will be:

  1. Support for Windows Server & Linux;
  2. Enhanced performance, usability & reliability;
  3. Extensibility with Azure-based security & management services.

What’s in the pipeline for SCOM specifically?

  1. Expanded HTML5 dashboards (FINALLY!!!);
  2. Enhancements in performance & usability;
  3. More integrations with Azure services (e.g. integration with Azure Insight & Analytics Service Map);
  4. Improved monitoring for Linux using a Fluentd agent.

On top of it all, YOU can influence the upcoming releases! Therefore Microsoft encourages you to join the System Center Tech Community and UserVoice forums to provide your feedback and suggestions.

Go here to read the posting I got all this information from. A BIG thanks to Peter Daalmans who pointed this posting out to me.

For me this is THE sign that Microsoft has FINALLY decided about the future of the System Center stack, by delivering insight into how they’re going to execute on their previously made promises to move the SC release cycle toward the Current Branch (CB) model.

As such I expect the end of notations like ‘SC 2016’. It makes sense to introduce a new naming scheme, like YYMM. Example: System Center 1806 refers to the SC release of June 2018. As a result I expect that there will be a new support model as well, just like the one in place for SCCM/ConfigMgr CB.
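That hypothetical naming scheme is easy to pin down: the name is simply the release date formatted as a two-digit year plus a two-digit month. A tiny sketch (in Python, and purely speculative, since Microsoft hasn’t confirmed this scheme for SC):

```python
from datetime import date

def cb_name(product, release_date):
    """Format a product name in the SCCM-style YYMM scheme."""
    # %y = two-digit year, %m = zero-padded month
    return f"{product} {release_date:%y%m}"

print(cb_name("System Center", date(2018, 6, 1)))  # System Center 1806
print(cb_name("SCCM", date(2017, 2, 1)))           # SCCM 1702
```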

For now Microsoft is silent about it, but to me it looks like the next logical step. It makes no sense to combine the new release cadence with a Mainstream Support End Date like the one on the current SC 2016. Even for a company like Microsoft, it would cost far too much money and resources, better used elsewhere (read: Azure).

Nonetheless, this development is a huge step forward and makes the future of the SC stack much brighter. For sure, it doesn’t have an eternal life expectancy. It never had. But at least there is something of a roadmap. And yes, one day the SC stack will be fully incorporated into Azure, which makes sense as well. But at least for now, Microsoft has recognized the significance of the SC stack.

Wednesday, June 14, 2017

SCOM 2016 Must Haves

Good to know:
This posting is based on the power of the community, since it recommends MPs, best practices and so on, all publicly available for free, shared under the motto ‘sharing is caring’. So all credits should go to the people who made this possible. This posting is nothing but a referral to all the content mentioned in it.

Why this posting?
’SCOM 2016 is just a little bit more complex than Notepad,’ I often tell my customers. I’m just trying to get the message across that even though SCOM packs quite awesome monitoring power, it still needs attention and knowledge in order to get the most out of it.

Even with the cloud in general, and OMS more specifically, SCOM still deserves its own place and delivers ROI for the years to come. And NO, OMS isn’t SCOM! But enough about that, time to move on…

Nonetheless, everything that makes SCOM 2016 more robust and/or easier to maintain is a welcome effort. And not just that: it should be used to the fullest extent.

Hence this posting, in which I try to point out the best MPs, fixes, workarounds, tweaks & tricks, all aimed at making your life as a SCOM admin easier. Since content comes and goes, this posting will be updated when required.

I’ve grouped the topics into various areas, trying to make them more accessible for you. There is much to share, so let’s start.

01 – A functional (and FAST) SCOM Console
Ouch! If there is a SCOM component I really dislike, it’s the SCOM Web Console. Why? It’s too slow, STILL has Silverlight dependencies (yikes!) and misses out on a lot of functionality. As such it’s quite dysfunctional and quite likely to become a BoS (Blob of Software) instead of a frequently used SCOM component… Therefore, most of the time I simply don’t install it.

Still, a FUNCTIONAL SCOM Web Console would be great. And when done right, it could even be used as a replacement for the SCOM GUI (the SCOM Console). But what to use? And when there’s an alternative, at what price?

Stop searching! The SCOM Web Console (and even SCOM GUI) alternative is already there! And yes, it’s a commercial solution. But wait! It has a FREE version, titled Community Edition! It’s HTML5 driven and taps into BOTH SCOM SQL databases, enabling the user to consume both sets of data on ONE screen. So you can look at current operational data and cross-reference it with data contained in the Data Warehouse!

And not just that, but it’s FAST as well! And I mean REALLY fast!

For many users this product has become a full replacement for BOTH SCOM Consoles. As a result the SCOM GUI is only used for SCOM maintenance by the SCOM admins. The consumption of SCOM data, state information and alerts however is mostly done by using the HTML5 Console.

Yes, I am talking about SquaredUp here. Go here to check it out. Click on pricing to see the available versions, ranging from FREE(!) to Enterprise Application Monitoring.

Oh, and while you’re at it, check out their new Visual Application Discovery & Analysis (VADA) proposition, enabling end users(!) to automatically map the application topologies they’re responsible for, all in the matter of minutes!

Advice: Download the CE version and be amazed at how FAST and good a SCOM Console can be!

02 – Automating SCOM maintenance & checks
I know, the name implies SCOM 2012. But guess what? SCOM 2016 is based on SCOM 2012 R2. As such, the MP I am about to recommend works just fine in SCOM 2016 environments as well.

Whenever you’re running SCOM 2016 I strongly advise you to import AND tune the OpsMgr 2012 Self Maintenance MP. It helps you to automate many things AND is capable of preventing SCOM MS servers from being put into Maintenance Mode (MM). When that happens (and the MP is properly configured!), this MP will remove those SCOM MS servers from MM! It’s also capable of exporting ALL MPs on a regular basis and keeping an archive of these exports for as many days as you prefer.

Please know that ONLY importing this MP won’t do. It requires some tuning, otherwise nothing will happen. Thankfully Tao Yang (the person who made this MP) provides a well written guide, explaining EVERYTHING! So RTFM is key here.

Advice: This MP is a MUST for any SCOM 2016 environment. Import and TUNE it.

03 – Prevent SCOM Health Service Restarts (on monitored Windows servers)
The name I am about to mention is that of a person who has made SCOM a far better product than it ever was. Without his efforts, time and investments SCOM would be far more of a challenge to master.

Yeah, I am talking about Kevin Holman. For anyone working with SCOM he doesn’t need any introduction. One of his postings is all about unnecessary restarts of the SCOM Health Service, the very heart of every SCOM Agent installed on any monitored Windows based system.

The same posting refers to the TechNet Gallery entry containing an MP addressing the causes of this nagging issue. Please RTFM his posting FIRST before importing the MP. That way you’ll differentiate yourself from the monkey in the zoo, pushing a button in order to get a banana without ever understanding the mechanisms behind it…

Advice: Import this MP in EVERY SCOM 2016 environment you own.

04 – Registry tweaks for SCOM MS servers
And yes, he also wrote a posting about recommended registry tweaks for SCOM 2016 Management Servers. And YES, he also provided the commands in order to rollout those tweaks.

Again: RTFM first before applying them. Alternative: press the button and be amazed when a banana appears out of thin air.

Advice: Make sure to run these registry tweaks on ALL your SCOM 2016 Management Servers.

05 – SQL RunAs Addendum MP
Like I already stated, we – the SCOM users – owe one man in particular a lot of thanks, even when he doesn’t want to hear about it. And it’s the same person we’re talking about here as well.

Until now I haven’t seen any SCOM environment NOT monitoring SQL instances. The SQL MP delivers tons of good information and actionable Alerts on top of it. As such, the SQL MP gets imported and configured. The latter WAS quite a challenge, all about making sure SCOM has enough permissions to monitor the SQL instances.

Luckily this difficulty is addressed by the SQL RunAs Addendum MP. Again, RTFM! But once read, import the MP and be amazed! Sure, this MP came to be through the effort of many people, so a BIG word of thanks to all the people involved.

Advice: IMPORT this MP and USE it! It makes your life much easier and saves you lots of time, to be used elsewhere.

06 – Agent Management Pack (MP)
Sure, when SCOM monitors something, a Management Pack is required. Without it, NO monitoring. Period. But still, the SCOM Agent running on the monitored Windows Server is also crucial. So all available information on those very same SCOM Agents is welcome, combined with some smart tasks in order to triage or remedy common issues.

Therefore it’s too bad that SCOM, out of the box, lacks many of those things. Sure, the basics are covered, but that leaves a lot of ground uncovered.

Thankfully, a community based MP solves this issue. Again, RTFM first before importing this MP, to be found here.

Advice: RTFM, import this MP and soon you’ll find yourself wondering how you ever got along WITHOUT it.

07 – Enable proxy on SCOM Agents as default
Whenever SCOM wants to monitor workloads living outside the boundaries of a single server (like SQL, AD and so on), it has to look ‘outside’ that same Windows server. By default the SCOM Agent isn’t allowed to do that, for security reasons.

Sure, people can hack into anything. But to think that a hacker would impersonate a SCOM Health Service workload is something else altogether. Why? Well, the moment a hacker is already that deep into your network, chances are he/she will have found something far more lucrative AND easier to grasp.

Nonetheless, the SCOM Agent proxy is disabled by default. Sure, you can enable the Agent Proxy with a scheduled script. But when you’re already applying that workaround (because that’s what it is…), why not change the source instead and be done with it?

Go here, follow the advice and apply the scripts. From that moment on the SCOM Agent proxy is ENABLED by DEFAULT. Problem solved. Next!

Advice: Enable the SCOM Agent proxy and forget about it.

08 – SCOM 2016 System Center Core Monitoring Fix
The System Center Core MP from SCOM 2016 (up to UR#3!) contains some issues, as stated by Lerun on TechNet Gallery: ‘…temporary fix for rules and monitors in the System Center Core Monitoring MP shipped with SCOM 2016 (UR3). Issues arise when using WinRM to extract WMI information for some configurations. The issue is reported to Microsoft, though until they make a fix this is the only workaround except from disabling them…’

RTFM his description and import the MP from TechNet Gallery.

Advice: Import this MP and forget about this issue.

09 – SCOM Health Check Report V3
Okay, this MP was written when SCOM 2016 was only a dream. But still, this MP works with SCOM 2016. Again, RTFM is required here. But again, the guide tells you all there is to know and to DO before importing this MP.

This MP gives you great insight into the health of your SCOM environment and is made by people I highly respect (Pete Zerger and Oskar Landman). Download the MP AND the guide from TechNet Gallery, RTFM the guide, do as stated in the guide, import the MP and be amazed about the tons of worthwhile insights you get.

Advice: Is the MP already in place? If not, please do so now.

As you can see, for now there are 9(!) tweaks, recommendations, MPs and so on, all enabling you to have a better life with SCOM 2016. Feel free to share your experiences, best practices, tweaks and so on.

Once double checked, I’ll update this posting accordingly, with your name attached of course!


Wednesday, May 24, 2017

System Center 2016 Update Rollup 3 Is Out

Yesterday Microsoft released Update Rollup 3 (UR#3) for System Center 2016. UR#3 contains a bunch of fixes for SCOM 2016 issues. KB4016126 contains the whole list of the fixes for SCOM 2016.

And YES, the earlier mentioned APM issue of the MMA crashing IIS Application Pools running under .NET Framework 2.0 is fixed with this UR!

Tuesday, May 23, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 02 – SCCM

Advice to the reader
This posting is part of a series of articles. To get the full picture, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
03 – SCOrch

In the second posting of this series I’ll write about how SCCM relates to Microsoft’s Mobile First – Cloud First strategy. The reason I start with SCCM is that this component is quite special compared to the other System Center stack components. For a long time already it has had its own space, even outside the regular SC stack. There is much to tell, so let’s start.

Big dollars
First of all, SCCM is still BIG business for Microsoft. We all know that Microsoft makes a lot of money, so when something is BIG business to them, think BIG as well. Many enterprise customers use SCCM, and not just some parts of it, but to its fullest extent. All this results in SCCM being one of Microsoft’s flagship products/services, thus getting proper funding and resource allocation, combined with a healthy and clear roadmap.

Even though SCCM still has System Center in its name, it’s being pushed outside the regular System Center stack more and more. And yes, I do see (and respect) the suspected reasons behind it all.

Current Branch (CB)
Some time ago SCCM introduced a new approach to software maintenance. As such, SCCM no longer adheres to the well-known ‘Mainstream Support & Extended Support’ end date model which is still in place for the other components of the System Center stack.

Instead SCCM is updated on an almost quarterly basis, meaning SCCM gets about 4(!) updates per year! Which is quite impressive. However, this new approach requires new branding AND a new support model. Even for a company like Microsoft it’s infeasible to support a plethora of SCCM versions for many years.

So instead of using a year branding like SCOM 2016, a new naming scheme was invented: the Current Branch (CB) release. The CB releases of SCCM are named SCCM YYMM. Some examples: SCCM 1610, SCCM 1702 and so on. So SCCM 1702 is the CB release of 2017 (17), from the second month (02) of that year.
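The YYMM convention can be decoded mechanically. A tiny sketch (in Python, purely as an illustration of the naming convention) that splits a CB name back into year and month:

```python
def parse_cb_version(name):
    """Split an 'SCCM YYMM' Current Branch name into (year, month)."""
    digits = name.split()[-1]      # 'SCCM 1702' -> '1702'
    year = 2000 + int(digits[:2])  # '17' -> 2017
    month = int(digits[2:])        # '02' -> 2
    if not 1 <= month <= 12:
        raise ValueError(f"not a valid YYMM release name: {name!r}")
    return year, month

print(parse_cb_version("SCCM 1702"))  # (2017, 2)
print(parse_cb_version("SCCM 1610"))  # (2016, 10)
```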

And not just that: there are even CB releases with a MONTHLY cycle. However, those CB releases are kept inside a small circle consisting of Microsoft itself, some special customers and SCCM MVPs. The details are unknown to me since Microsoft doesn’t talk much about it. Only CB releases which are deemed good and stable enough are pushed out to the public, which in general happens once every 4 months.

Sometimes these ‘in between’ CB releases are made available as a Technical Preview (TP). Not meant for production (nor supported!!!), but meant for testing. At this moment SCCM 1704 is the TP.

Why CB?
There are plenty of reasons for the CB approach, like supporting the latest version of Windows 10, which also adheres to a CB based release cycle. So whenever new functionality is introduced with the latest release of Windows 10, the most current CB release of SCCM supports it 100%.

Another reason is that customer feedback is incorporated much faster, compared to the old approach where – if you were lucky – an update was released once every 1.5 years. Now, just a few months later, customer requests and feedback are incorporated directly into the latest CB release.

And yes, there is also another reason…

CB and the cloud: SCCM as SaaS!
Sure, with every new CB release of SCCM, you’ll notice that SCCM is tied more and more into the cloud. This goes beyond deeper integration with Windows Intune; it extends to Azure in general. So step by step SCCM is growing into a Software as a Service (SaaS) cloud delivery model.

And the proof of it is already there. Because updating SCCM used to be quite a challenge. Microsoft has addressed this issue quite well, and with every CB release the upgrade process and experience are improved even further.

Since CB saw the light, SCCM can be upgraded quite easily, all powered by Azure. Sure, as a SCCM admin you still have some work to do, but the upgrade process has become quite solid and safe. Just follow the guidelines set out by SCCM itself, and you’ll be okay in most cases. No more Russian roulette here!

How about support for CB releases?
Good question. Like I already stated, CB releases adhere to a new support model as well. And this new support model doesn’t last years like we see for the rest of the System Center stack, but MONTHS! Which is quite understandable. Instead of Mainstream/Extended Support, SCCM CB adheres to two so-called Servicing Phases:

  1. Security & Critical Updates Servicing Phase;
  2. Security Updates Servicing Phase.

The names of the servicing phases are quite self-explanatory, so no need to repeat them here I hope. The first servicing phase applies to the most current CB release publicly available, and the second servicing phase applies to the CB-1 release, being the CB release previous to the most current one.

How does it work? Let’s take a look at today’s situation. SCCM 1702 is the most current CB release. As such it falls in the first servicing phase (Security & Critical Updates). Meaning, it’s fully supported by Microsoft: security and critical updates will be released for it.

SCCM 1610 is now the CB-1 release. So this CB release falls in the second servicing phase (Security Updates). This CB release doesn’t have Microsoft’s full support anymore. Instead it will only receive security updates, and that’s it.

Suppose a new SCCM CB release becomes publicly available, let’s say SCCM 1706. Everything will move one rung down the servicing phase ladder:

  • SCCM 1706 will adhere to the first servicing phase (Security & Critical Updates);
  • SCCM 1702 will adhere to the second servicing phase (Security Updates);
  • SCCM 1610 won’t be supported anymore.
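The servicing phase ladder described above can be modeled in a few lines. A sketch (in Python, purely as an illustration of the CB/CB-1 rule, not any official tooling):

```python
def servicing_phase(releases, release):
    """Return the servicing phase for a CB release.

    `releases` lists the publicly available CB releases in order,
    oldest first; only the newest (CB) and the one before it (CB-1)
    remain serviced.
    """
    if release == releases[-1]:
        return "Security & Critical Updates"  # current CB release
    if release == releases[-2]:
        return "Security Updates"             # CB-1 release
    return "Unsupported"

ladder = ["SCCM 1610", "SCCM 1702", "SCCM 1706"]  # after 1706 ships
print(servicing_phase(ladder, "SCCM 1706"))  # Security & Critical Updates
print(servicing_phase(ladder, "SCCM 1702"))  # Security Updates
print(servicing_phase(ladder, "SCCM 1610"))  # Unsupported
```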

Sure, this forces companies to follow the CB flow as much as possible. But with every new CB release life is made easier, because SCCM keeps growing into SaaS, making each upgrade easier than the last.

!!!Spoiler alert!!! CB isn’t just new branding
Please keep this in the back of your mind, at least for this series of blog postings: CB is far more than just new branding!

As you can see with SCCM, CB encompasses not only a whole new support model (aka Servicing Phases); the development cycle is also totally different. The way customer feedback is processed and decided upon, whether or not to incorporate it into a future CB release. The way SCCM is tied more and more into the cloud, growing toward a SaaS delivery model. The way SCCM is upgraded from one CB to another.

And so on. And yes, introducing, maintaining and growing the CB model costs money and resources. Which are available for SCCM without any doubt. As you’ll see in the future postings of this series, however, this kind of funding and resourcing is quite different for the other components of the System Center stack.

Verdict for SCCM and its future
Without a doubt, the future of SCCM is okay. For sure, SCCM will be tied more and more into the cloud. But that’s not bad at all. Also, with every CB release SCCM will grow even more into a SaaS delivery model, enabling you, the administrator, to focus on the FUNCTIONALITY of SCCM instead of working hard just to keep it running…

SCCM adheres 100% to Microsoft’s Mobile First – Cloud First strategy. And not just that: it also enables that strategy through the functionality it offers. So whenever you’re working with SCCM, rest assured.

Many changes are ahead for it, but SCCM is in it for the long run, stepping away more and more from the System Center stack as a whole and, as such, creating its own space within the Microsoft cloud portfolio and service offerings.

SCCM is safe and sound and will give you full ROI for many years to come. Simply keep up with the CB pace and you’ll be just fine.

Coming up next
In the third posting of this series I’ll write the epitaph for Orchestrator – SCOrch (I am sorry to bring the bad news, but why lie about it?). See you all next time.

Friday, May 19, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 01 – Kickoff

Advice to the reader
This posting is part of a series of articles. To get the full picture, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
02 – SCCM
03 – SCOrch

In this new series of blog postings I’ll write about the effect of Microsoft’s ‘Mobile First – Cloud First’ strategy on the System Center stack.

This posting is the first of this series.

‘Put your money where your mouth is’
This phrase most certainly applies when looking at Microsoft’s ‘new’ Mobile First – Cloud First strategy. And not just that: Microsoft has given the phrase ‘put your money where your mouth is’ a whole new dimension of depth and breadth, simply because their investments in the cloud (Azure, Office 365, Windows Intune and so on) and everything related are unprecedented.

Azure regions are added on an almost quarterly basis, while a single Azure region requires a multi-billion dollar investment. Azure itself is growing on a weekly basis: new services are added, while existing ones are modified or extended.

It’s quite safe to say that Microsoft’s Mobile First – Cloud First strategy isn’t marketing mumbo jumbo, but the real deal. Microsoft is changing from a software vendor into a service delivery provider with a global reach. On top of it all, Microsoft is also capable of delivering the cloud to governments, adhering to specific laws and regulations.

The speed of all these changes is enormous. Like an oil tanker turning into a speed boat while changing course and direction. As such, one could say that Microsoft is rebuilding itself from the ground up. Nothing is left untouched; even the foundations are rebuilt, or removed when deemed unnecessary.

As a direct result many well known Microsoft products are being revamped, especially products which originally had a strong on-premise focus, like Windows Server. Now these same products are far easier to integrate with Azure based services. As such these products are growing into a more hybrid model, enabling customers to reap the benefits of both worlds: on-premise and the (public) cloud.

How about System Center?
For sure, this massive reinvention of how Microsoft does business is affecting the System Center stack as well. Many components of the System Center stack date from the so-called ‘pre-cloud era’, the days when the cloud was nothing but a buzzword. Most workloads and enterprise environments were located in on-premise datacenters. Not much, if anything at all, was running in any cloud, whether public or private.

Mind you, this is outside SCVMM of course.

The source code of many System Center stack components still reflects that outdated approach. So if Microsoft were to turn the System Center stack into a more hybrid solution, much of that source code would require serious rewrites. Without huge investments this can’t be done.

Why this new series of blog postings
So this brings us to the main question on which this series of postings is based: where does System Center fit into the new ‘Mobile First – Cloud First’ strategy? At this moment the System Center stack looks isolated compared to other Microsoft based solutions.

In this series of blog postings I’ll look at each System Center component and how it relates to the new Microsoft. I’ll also write about available Azure based alternatives (if any). The last posting of this series will be about the System Center stack as a whole and whether it still deserves a place in the brave new world, powered by Azure.

I can tell you, many things are happening with the System Center stack. Most of them in plain sight but some of them hidden from your direct line of sight. Just like an iceberg…

So stay tuned. In following articles of this series I’ll show you where and how to look in order to see the whole iceberg…

Tuesday, May 2, 2017

MP Authoring – Quick & Dirty – 05 – Q&A

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it.

Other postings in the same series:
00 – Introduction
01 – Overview
02 – Authoring the Template MP XML Code
03 - Example Using The Template MP XML Code
04 - Testing The Example MP

In the last posting of this series I'll do a Q&A in order to answer questions and respond to feedback I've received while working on this series. If your question or feedback is missing, don't hesitate to reach out to me, whether directly or by commenting on this post.

Q01: Is the ‘Quick & Dirty’ approach only doable for ONE Class and a ‘single’ layered application/service?
A01: No, you can add as many Classes as you require. However, there are some things to reckon with:

  1. When monitoring a multi-layered application the ‘Quick & Dirty’ approach may be a way to address it.
  2. However, when there are more than 3 layers, it’s better to look for alternatives.
  3. When adding a new Class to the MP, don't forget to add the Reverse Discovery and Group as well. The Group is required as the target for enabling the Discovery (which is disabled by default, hence REVERSE Discovery).


Q02: I see what you're trying to achieve. Nonetheless, I prefer to target my Discoveries at registry keys which are more specific to the application/service I author my MP for. Why use your method instead?
A02: For many IT shops authoring MPs is quite a challenge, whether because of their current workload, available time, budget, resources or knowledge.

For environments like those, custom MP authoring isn't fun at all. Nonetheless, sometimes they have to deliver custom MPs of their own.

In situations like these, many challenges of MP authoring need to be addressed. In my line of work I notice that buggy MPs are often delivered, resulting in a bad SCOM experience. Many times the bugs in those MPs are caused by badly designed Discoveries and poorly defined Classes.

By introducing a template for their MP XML code, containing a predefined Class with a REVERSE Discovery, these two main challenges are properly addressed. On top of that, it enables IT shops to quickly deliver a custom monitoring solution with proper Classes, Discoveries and monitoring. And it's far easier to teach them this approach than to take the deep dive into the world of MP authoring.

Sure, it’s always better to work with registry based Discoveries targeted at registry keys unique to the workload to be monitored. But for IT shops like that it’s better all together to stay away from the ‘Quick & Dirty’ approach.


Q03: Do I need to pay for Silect MP Studio in order to use your ‘Quick & Dirty’ approach?
A03: No you don’t. However, there is a small caveat to it. As long as your custom MP can cover the requirements with basic Monitors and Rules, the free version of MP Author will suffice. The FREE version of MP Author allows you to build these Monitors and Rules:

  1. Windows Database Monitor;
  2. Windows Event Monitor;
  3. Windows Performance Monitor;
  4. Windows Script Monitor;
  5. Windows Service Monitor;
  6. Windows Website Monitor;
  7. Windows Event/Alert Rule;
  8. Windows Performance Rule;
  9. Windows Script Performance Rule.

As you can see, an impressive list. The paid version (MP Author Professional) offers on top of the previous list these additional Monitors and Rules:

  1. Windows Process Monitor;
  2. SNMP Probe/Trap Monitor;
  3. Dependency Rollup Monitor;
  4. Aggregate Rollup Monitor;
  5. SNMP Probe Event Rule;
  6. SNMP Probe Performance Rule;
  7. SNMP Trap Event/Alert Rule.

So when you require SNMP monitoring, you have to buy the Professional version.


Q04: I'd rather stick to the MP authoring tool released for SCOM 2007x. It's still available and FREE as well. And it allows me to build any Monitor/Rule I need. Why change?
A04: With the introduction of SCOM 2012x, the MP Schema changed as well, for multiple reasons, among them the extended monitoring capabilities of SCOM 2012x and later SCOM 2016.

The MP authoring tools for SCOM 2007x don't support the new MP Schema, nor the new SCOM 2012x/2016 monitoring features. Sure, any SCOM 2007x MP using the old XML Schema will be converted to the new one. However, the SCOM 2007x MP authoring tool can't work with it.

As such, your MP development will suffer sooner or later when using this outdated tool. This tool also has a steep learning curve. In cases like this it's better to master MP Author and move on to the paid version when required, or (when the proper licenses are in place) to move to VSAE.


Q05: I find it quite a coincidence that you post a whole series on MP authoring using Silect MP Author and that a new version of it is launched soon after. And now you're also presenting at MP University 2017!
A05: I wish I were part of such a scheme. It would make me earn loads more money (duh!). But let's put the joke behind us and give a serious answer.

At the moment I started to write this series I had no connections whatsoever with Silect. None. So them bringing out a new version of MP Author is pure coincidence. And also a pain, because I had to redo many screenshots all over again…

Nonetheless, because of this series of postings I got on Silect's radar. As such they asked me whether I wanted to present a session at their MP University event. That's all there is to it. Nothing more, nothing less.

And no, I have nothing to do with chemtrails or other conspiracy theories. As much as I would love to, I simply don't have the time for it.


Q06: Do you recommend VSAE over MP Author or vice versa?
A06: There is no one-size-fits-all when it comes down to MP authoring. Sure, with MP Fragments VSAE enables you to author MPs very fast. But it requires Visual Studio licenses for your company. When those aren't in place, don't use VSAE in a commercial setting, since you'd be in breach of the license agreement.

On top of it all, MP Author is very accessible tooling for non-developers. And with the latest update, MP Author Professional supports the usage of MP fragments as well!

Therefore, the choice is yours, based on your liking, background, requirements and available budget.

MP University 2017 - Agenda

As stated before, tomorrow on the 3rd of May Silect hosts their annual online SCOM Management Pack event, titled MP University 2017.

This online event is FREE and – when working with SCOM and OMS – worth attending. Simply because this event has a very impressive line up and agenda(*):

09:00 – 09:15 Introductions and Kick off
09:15 – 10:00 Silect: Management Pack basics, MP Author
10:00 – 11:30 Kevin Holman: VSAE, fragments, MP dev Q&A
12:15 – 13:00 Silect: fragment authoring and sharing using MP Author Pro / MP Studio
13:00 – 14:00 Brian Wren: OMS and Solution Packs
14:00 – 15:00 Marnix Wolf: MP Authoring "Quick and Dirty"
15:00 – 16:00 Bhaskar Swarna / Microsoft: SCOM 2016

(*: Time is set in EDT time, so depending on where you reside, the actual time may differ.)

For me Brian and Kevin are two BIG names in the SCOM/OMS world. And not just that, they know how to share their knowledge and experience. So I am honored to be part of this event and to present a session of my own.

I’ve been told the sessions will be recorded and made available. However, when attending this event in person there will be enough time for some good Q & A. So my advise is to attend this FREE online event as much as you can.

Presenting At MP University

Wh00t! I’ve got the honor to present at MP University 2017 the FREE 1 day event on Management Pack Authoring, SCOM 2016 and Microsoft OMS!

This event is organized by Silect and Microsoft and has an awesome lineup of experts, like Brian Wren and Kevin Holman! For me these two people have tons of knowledge of hard core MP Authoring (among other knowledge). So I am honored to be part of this lineup.

My session will be about MP Authoring – Quick & Dirty, the same topic I am already blogging about (the last posting of that series is in the making).

The MP University 2017 event will be on the 3rd of May, 9AM to 4PM EDT (Amsterdam time: 3PM to 10PM). You need to register in order to attend. Again, this is a FREE event, packed with tons of rock solid information.

So whenever you’re working with SCOM, MP Authoring and/or OMS, this event is a MUST go!

Community MP: SCOM Agent Management Properties & Tasks

Some time ago Kevin Holman authored an MP as an example of what SCOM can do for you. I've put this MP on my list of community MPs which are a MUST have for any SCOM environment, just like the SCOM Health Reports and Tao's OpsMgr 2012 Self Maintenance MP.

What Kevin’s MP does? Good question! The answer is easy: It simplifies the administration of your SCOM Agents in many ways. It does this by adding useful properties of the SCOM Agent and by adding useful tasks.

Properties added to the SCOM Agent and as such shown in the Console(*):

  1. The “real” agent version;
  2. The UR level of the agent;
  3. Any Management Groups that the agent belongs to;
  4. A check if PowerShell is installed and what version;
  5. OS Version and Name;
  6. Primary and Failover management servers;
  7. The default Agent Action account.

Tasks added to the SCOM Agent (or Console)(*):

  1. Computer Management;
  2. Create Test Event;
  3. Execute any PowerShell;
  4. Execute any Service Restart;
  5. Execute any Software from Share;
  6. Export Event Log;
  7. HealthService – Flush;
  8. HealthService – Restart;
  9. Management Group – ADD and Management Group – REMOVE;
  10. Ping – (Console Task);
  11. Remote Desktop – (Console Task).

(*: For descriptions what a certain property means or task precisely does, please visit this webpage on TechNet Gallery.)

As you can see, this is an IMPRESSIVE list of properties and tasks which should have been there by default. Nonetheless, this FREE MP adds them, empowering you to run SCOM even more smoothly.

Go here to download the MP for FREE and read about the MP in more detail.

Monday, April 24, 2017

Updated MP: ConfigMgr (SCCM) MP, For Hybrid Scenarios

Recently Microsoft updated their ConfigMgr (SCCM) MP to version 5.00.8239.1009. With this update the MP is capable of monitoring hybrid scenarios.

As stated by Microsoft: ‘…This update will extend the capabilities to monitor availability, performance, and health of the Microsoft Intune connector site system role for companies who integrate Configuration Manager and Microsoft Intune in a hybrid environment…’

The MP can be downloaded from here.

Wednesday, April 19, 2017

Webinar ‘Is The System Center Stack Dead Or Not?’

On the 17th of May 2017 I’ll present a session for the Windows Management User Group Netherlands with the title: Is the System Center stack dead or not?

Session abstract:

‘…With Microsoft’s Cloud & Mobile First strategy, our world has changed drastically. Not a single Microsoft product is left untouched. The same goes for System Center stack.

Many of the components of this stack date from the pre-cloud era whereas other SC components were invented in order to enable/support the Private Cloud strategy, which on itself is already abandoned or at least outdated by now.

So where does it leave System Center? Is it dead? Or about to die? Is SC still worth the investment or is it time to abandon ship and head for the cloud?

And when so, what are the cloud based alternatives? Are they better/cheaper/easier to use? Is it possible to migrate on-premise workloads from the SC components to the cloud?

Whenever you ask yourself one or more of the above questions, join the webinar in order to get some decent answers. No marketing mumbo-jumbo but the ‘real deal’, experiences from the field. No cloud washing nor bashing!…’

Since it’s a webinar and I’ll present it in English, it’s easy to join. Want to know more? Go here.

Monday, April 10, 2017

MP Authoring – Quick & Dirty – 04 – Testing The Example MP

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it. 

Other postings in the same series:
00 – Introduction
01 – Overview
02 – Authoring the Template MP XML Code
03 - Example Using The Template MP XML Code
05 – Q&A

In this posting I’ll show you how to import the previous made custom MP (SAP.Logon.Server.xml) into SCOM and how to activate it by enabling the Discovery through an Override targeted against the Group (SAP Logon Server Group) contained in the same MP.

Again I’ve made several sections in order to keep it readable. There is much to tell, so let’s start!

Using the custom made MP
As stated in the previous posting, the custom MP is going to monitor a non-existent SAP Logon Server, just as an example to show how easily a custom MP is created based on the template MP XML code, in conjunction with Notepad++ and MP Author.

01: Importing the MP and enabling the Discovery through an Override
Now you’re going to import the MP and enable the Discovery through an Override by using the Group present in the same MP.

Before you start:

TEST the MP in a SCOM TEST environment, before putting it into production!!! I don’t accept any responsibility for broken SCOM environments!!!

  1. Import the MP in your SCOM test environment. Importing can be done from MP Author or the regular way, by using the SCOM Console;

  2. After the MP is imported: In the SCOM Console (log on with SCOM Admin permissions!): Go to Authoring > Groups and search for Groups with SAP in the name. Select the SAP Logon Server Group and open it;

  3. Remove the Dynamic Member rule by using the Create/Edit Rules button.

  4. Go to the tab Explicit Members and hit the Add/Remove Objects button. In the Search for: field select Windows Server Operating System (since the Discovery is targeted against that Class!) > Search > select the related Windows Servers where SAP Logon Server is installed. In this example I select DC01 > Add.
    In the Selected objects field the DC01 is now shown > OK > OK. The modifications to the Group will be saved now.

    <Advice> Please know that selecting the object(s) of the correct Class (Windows Server Operating System) is crucial here. Reason being that the Discovery is targeted against that very same Class. Adding object(s) of any other Class won't work because the Discovery won't land there. For example, adding the DC01 object from the Windows Computer Class won't make the Discovery work, nor adding the DC01 object of the Windows Server Computer Class.
    So please select the DC01 object of the Windows Server Operating System Class.
    <\End of Advice>

  5. In the SCOM Console in the menu bar go to Tools > Objects > click on Object Discoveries. Search for SAP and select the SAP Logon Server Discovery. In the Object Discovery Details screen, hit the View Knowledge link. Now the properties of the SAP Logon Server Discovery will be shown.

  6. Go to the tab Override > hit the Override button and select For a Group…:

  7. Search for SAP and select in the Matching objects field the SAP Logon Server Group:
    > OK

  8. Set an Override for Parameter Name Enabled by setting the Override Value to True. Since the Group is already contained in an Unsealed MP (SAP Logon Server), you can’t choose another Unsealed MP.
    > OK. The Override will be saved now, thus enabling the Discovery for the SAP Logon Server Class for the Group SAP Logon Server Group, containing Windows Server Operating System object DC01.

    In the next section you’re going to see whether all your hard work paid off!

02: Testing the MP
Now it’s finally time to see the results of your hard work!

  1. Within 5 to 10 minutes you'll see that DC01 is discovered as an object of the SAP Logon Server Class:

  2. It also has a State because of the Monitor you previously made, shown here in the Health Explorer of the SAP Logon Server object DC01:
    Let’s stop the SAP Logon Service (Spooler service actually Smile) on DC01 in order to see what happens.
    Spooler service on DC01 is stopped…

  3. Yes, it works!


    An Alert is also shown:

    Let’s start the Spooler service again in order to see what happens (Monitor should go back to a healthy state and the Alert should be closed automatically):

    And yes, the Alert is closed automatically:




  4. The Rules you made earlier work as well:



03 – Recap
As you can see, with the template MP XML code it takes little effort to cook up a new MP in order to monitor custom workloads running on one or more Windows Servers.

I’ve taught may IT Pro’s this approach. Many of them are capable of authoring a custom MP based on the template MP XML code within an hour! Meaning within 60 minutes they have the monitoring up and running in SCOM!

The most crucial part here is to collect the information: WHAT needs to be monitored and HOW? In 99% of the cases it's pretty straightforward, thus viable for the 'Quick & Dirty' approach this series is all about.

Next posting in this series
By way of a Q&A I'll share additional information about how to extend the usage of the 'Quick & Dirty' approach, thus aiding you even more in your quest to get SCOM monitoring on par with the requirements of your organization.

See you all next time!

Friday, April 7, 2017

RANT: Ignite & Other Microsoft Events

A few weeks ago I attended the free Microsoft Tech Summit event in Amsterdam. It lasted two days, in which I learned a great deal, most of it outside the sessions. It also made me think about how the Microsoft event cycle is organized now and what's lacking BIG time.

At the end of the day, these are just my thoughts/ideas. So there is no need to feel/think the same. Yet I am wondering whether I am the only one out there or perhaps there are people thinking the same about it.

So feel free to comment.

The past – The pre-Ignite Era
Before Ignite there were the Tech-Ed events, organized in the US, Europe, Asia and Australia. Besides that there were more product/service related events as well, like MMS, events for partners and Exchange/Lync events.

Those events were quite accessible and contained good content, aimed at their respective audiences. Also because most of those events were organized in different regions (Tech-Ed events that is), those events were fairly easy to go to. No need to travel to the US, only when one wanted to go to MMS or the Exchange/Skype event for instance.

In most cases it was pretty easy to write a business case for it, thus being allowed to visit a Microsoft event.

Sadly, Microsoft decided to change things and Ignite was born…

The current situation: Ignite was born, and many other good events were killed
With the birth of Ignite, many good events like Tech-Ed, MMS and the Exchange/Skype events were killed, because Ignite would bring it all together. A bigger and much happier place, at least that's what Microsoft aimed for (I guess).

So instead of events targeted at certain audiences, one big monster event was created targeted at everyone and everything. And instead of covering many different regions, Ignite is hosted in the US and Australia only.

Both locations are quite out of reach for many organizations residing outside the US and Australia. The result: Ignite sells out while not covering the demand for information previously served by events like Tech-Ed, MMS and others. Moreover, many people who normally attended the more regional events were (and still are!) left out.

Meaning, Microsoft is missing out on tons of valuable feedback. But at the end of the day Ignite is a bigger event than VMworld and the events organized by other competitors, and IMHO that seems to be what counts for Microsoft.

As another side effect, it has become even harder to present at Ignite as a non-Microsoft employee, while in the past most of the best sessions were given by non-Microsoft employees. Also, many sessions follow the Microsoft regime, resulting in too much marketing mumbo jumbo.

No, I’ve never been to Ignite myself. But I’ve watched my share of recorded Ignite 2016/2015 sessions. And content wise, the overall quality has dropped significantly compared to the pre-Ignite era. Not only because of the Microsoft regime but also because EVERYTHING has to be crammed into one week covering ALL levels of expertise and all products/services. And there is only a fixed number of rooms per day available. Resulting in the dropping out of smaller sessions with highly specialized content aimed at a smaller number of IT specialists…

Trying to bridge the gap and failing…
Nonetheless, many professionals started complaining about missing out on many things. As such, events like Microsoft Tech Summit were born/rebranded/revived, basically being nothing but a repeat of the previous Ignite, on a much smaller scale of course.

Back to how I experienced Microsoft Tech Summit
And yes, many sessions of Microsoft Tech Summit shamelessly referred to Ignite 2016, whether in the title of the slide deck (using the Ignite session code) or the demo environments, bearing names like Ignite2016 and so on… Other sessions were 'enriched' with commercials about Windows 10, the latest Surface computers and so on. As a result the level of those sessions dropped from an already low 200 to barely 100…

All this resulted in a failed event. The first day I simply walked out of the keynote (a total fail) and some sessions because they were total crap. A waste of my time. So instead I connected with old customers, friends and former colleagues. Which was also nice, but not the main reason for attending that summit.

The first day there was only ONE really good session and another coming close to it. The second day there were only two good sessions, the rest being a shameless repeat with added commercials. During the event I spoke with many peers, and to my relief they felt/experienced the same. So it's not just me being picky!

Time for revamping Ignite
Sure, Ignite is the BIGGEST Microsoft event ever organized. And yes, every time it's sold out! So when living in the Microsoft bubble in Redmond it's easy to say Ignite is a success and I am full of sh#$!t.

But try to look at it from another perspective. Even though Ignite is sold out, it doesn't mean the overall quality is on par with the pre-Ignite era. Nor does it mean it covers the need for information of all IT people, whether IT decision makers, developers, managers, pros and so on.

Could it be it’s sold out because it’s the ONLY event, sharing new information, no matter the lower level of quality of the sessions? Could it be that even more people AREN’T attending because Ignite is sold out so quick and/or they aren’t allowed to attend because it’s too far, thus too expensive?

Still there are regions without an Ignite, like Asia and Europe. So why not organize Ignite-like events in Asia and Europe as well, bringing down the scale but improving the overall quality of the sessions by allowing more content specific sessions and more non-Microsoft employees to present?

At the end of the day, many people and organizations in IT are willing to pay to attend an Ignite-like event. The main deal however is that the location of the event should be more local to the organization and the content of the sessions should be more on par with their demand.

With more Ignite-like events outside the US and Australia, it would be easier to allow for more sessions aimed at smaller audiences with their own specific requests for information. This would be a huge improvement to the overall quality of Ignite.

As a side effect, it would create more traction, thus bringing in more attendees. Combined, all these Ignite events would bring more people together than the current two events in the US and Australia.

Will this ever happen?
Sure! When we believe in it and keep on ASKING for it. Make yourself heard. Speak up! Let Microsoft HQ know what you think about the current situation. In the past they could organize multiple Tech-Ed events around the globe. So why not use that experience in order to organize multiple Ignite events around the globe?

At the end of the day, Azure is hosted not only in the US and Australia but also in many other regions. Ignite events should reflect that situation.

MP Authoring – Quick & Dirty – 03 – Example Using The Template MP XML Code

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it. 

Other postings in the same series:
00 – Introduction
01 – Overview
02 - Authoring The Template MP XML Code
04 - Testing The Example MP
05 – Q&A

In this posting I’ll show by example how easy to use the previous made template MP XML code is, when a custom MP is required in order to monitor specific workloads.

A non-existent workload to be monitored, just for the sake of it…
For this example I’ve cooked up this non existing server: SAP Logon Server. It runs one Windows service (the Spooler service) which is rebranded into the SAP Logon Service for the sake of this example. Also it logs certain events which are crucial and must be monitored as well:

  • EventID 1201 of the OperationsManager eventlog, rebranded into SAP Logon Server Tokens Unloaded;
  • EventID 1210 of the OperationsManager eventlog, rebranded into SAP Logon Server Bad Response Received.

The server which will become the SAP Logon Server is the DC in my test lab, DC01.

I know, it’s all none existent (accept for the DC), but like stated before it’s an example.

Items to reckon with when authoring a custom MP
There are some issues to reckon with whenever you create a custom MP:

  1. You’ll need a proper name for the MP, adhering to the monitored workloads. In this case the name of the MP will become SAP Logon Server. Choose the name wisely because it’s going to have an impact on everything you’re going to do;

  2. When this MP is authored, imported into SCOM and the Discovery for the SAP Logon Server Class is set so the proper server(s) will be detected, we also REQUIRE a State. In SCOM, states are ONLY set by Monitors. So we REQUIRE at LEAST one Monitor. Stateless objects are a no-go in SCOM, especially in your custom made MPs;

  3. As a rule of thumb: use Monitors for services to be monitored, and Rules for triggering Alerts based on events logged in the Windows event logs. I know, with Rules there is a chance of a potential alert storm, but that can be addressed easily with MP Author. And yes, I'll tell you how.

Let’s author the MP to monitor the SAP Logon Server
There is much to tell, so let’s start. Again I’ve made multiple sections of all the steps to be taken because I want to get the message across in a decent manner.

Also I will share some additional tricks. Every trick is easy to identify since they’re all in blue text, start with the prefix <Trick> and end with the suffix <\End of Trick>.

01 – Copying the template MP XML code and modifying the names
For this you’ll need Notepad++ or the XML editor of your choice.

  1. Copy the template MP XML code file (custom.application.xml). A new file will be made in the same folder, titled Custom.Application - Copy.xml. Rename this file to SAP.Logon.Server.xml, please mind the DOTS (.):

  2. Open this file in Notepad++ and do a Search & Replace (<CTRL H>) on these 3 entries:
    Search for: Custom.Application. Replace with SAP.Logon.Server, mind the DOTS (.):


    Search for: Custom Application. Replace with SAP Logon Server, mind the SPACES ( );
    Search for: CustomApplication. Replace with SAPLogonServer, mind the LACKING spaces;

  3. Just to be sure all Custom Application entries are rebranded to SAP Logon Server, do a count (<CTRL H>) on the entry custom. There shouldn’t be any left. If there are, start over from Step 01.

  4. Save the modified XML and close Notepad++. As a result, the template MP XML code is now rebranded from Custom Application to SAP Logon Server. With MP Author you’ll add the Rules and Monitor as previously discussed.
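For those who prefer scripting over manual Search & Replace, the three rename passes and the leftover count from steps 2 and 3 can be sketched in a few lines of Python. Treat this as an illustration, not part of the official procedure; the file, class and display names are the ones used in this example:

```python
# Rebrand the template MP XML code (custom.application.xml) into the
# SAP Logon Server MP (SAP.Logon.Server.xml) without Notepad++.
import re

# The three Search & Replace entries from step 2, applied in this order:
REPLACEMENTS = [
    ("Custom.Application", "SAP.Logon.Server"),  # mind the DOTS (.)
    ("Custom Application", "SAP Logon Server"),  # mind the SPACES ( )
    ("CustomApplication", "SAPLogonServer"),     # mind the LACKING spaces
]

def rebrand(xml_text):
    for old, new in REPLACEMENTS:
        xml_text = xml_text.replace(old, new)
    # Step 3: verify no 'custom' entries are left (case-insensitive count).
    leftovers = len(re.findall("custom", xml_text, flags=re.IGNORECASE))
    if leftovers:
        raise ValueError(f"{leftovers} 'custom' entries left; start over")
    return xml_text

# Typical use: read custom.application.xml, write SAP.Logon.Server.xml.
sample = '<ClassType ID="Custom.Application.Class">Custom Application</ClassType>'
print(rebrand(sample))
```

The order of the replacements matters: doing the dotted names first prevents the space-separated pass from breaking the MP element IDs.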

02 – Adding the Monitor to the SAP Logon Server MP with MP Author
With MP Author you’ll build the required MP in order to monitor the SAP Logon Server specific workloads. Again, I won’t describe EVERY single step to be taken in MP Author, but highlight the most important ones.

  1. Open MP Author and open the previously saved XML file SAP.Logon.Server.xml. It might take some time for MP Author to load and open the file, but just be patient;

  2. First you’re going to author the Monitor. Go to Monitors > New > Create New Service Monitor > select the option Manually enter service name without connecting to a computer (Advanced users only) > Next;

  3. In the Select service to monitor screen, select SAP Logon Server Class as Target (now you see that Section 01 paid off) and enter the name of the service you want to monitor, in this example Spooler:
    > Next
    <Trick> In order to get the right name of the Service you want to monitor with SCOM: always use the Service Name as depicted in the service properties screen. As you can see, the Service Name for the Print Spooler service is Spooler, so you use that name in MP Author as well:
    <\End of Trick>

  4. Enter the proper information, like this:
    - Name: SAP.Logon.Service.Monitor (please mind the DOTs (.))
    - Display Name: SAP Logon Service Monitor (please mind the SPACES)
    - Description: Monitors whether the SAP Logon Service is running
    > Next

  5. When the service isn’t running I suppose it’s a Critical situation. So ascertain the Health State reflects that by setting it to Critical when the service isn’t running anymore:
    > Next

  6. In this screen you’ve to change many things:
    - Select Generate alerts for this monitor
    - The Alert will be generated when the state is changed to Error
    - Select Automatically resolved the alert when the monitor returns to a healthy state
    - Leave the Alert Name to the default selection. It won’t be shown in the SCOM Console Smile
    - Alert Display Name: SAP Logon Service isn't running!
    - Priority: High and Severity: Error
    - Description: SAP Logon Service  failure on $Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$.
    Please see the alert context for details.

    > Next > Finish > Save the changes. Stay in MP Author please.

    Now you’ve added the Monitor! Great! In the next section you’ll add the two Rules in order to monitor for the Event IDs 1201 and 1210 in the Operations Manager event log.

03 – Adding the Rules to the SAP Logon Server MP with MP Author
Time to add the two Rules!

  1. In MP Author: Go to Rules > New > Create New Windows Event/Alert Rule > select the option Manually enter event log name without connecting to a computer (Advanced users only) > Next;

  2. In the Specify event rule data screen, enter the proper information, like this:
    - Event Log: Operations Manager
    - Event ID: 1201
    - Event Source: HealthService
    - Deselect the option to Collect Data
    > Next

    <Trick> Sometimes it can be a challenge to find the right names of the event log and event source. With this trick however, you'll be 100% sure that SCOM is using the correct information. As such the Rules will work right away without requiring deep troubleshooting.

    Open the event you want SCOM to monitor and go to the Detail tab. There are two entries you require:
    - Provider Name, which adheres to the Event Source,
    - Channel, which adheres to the Event Log name in MP Author.
    <\End of Trick>
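    For reference, the Event Log, Event ID, and Event Source you enter in the wizard map directly onto the Channel, EventDisplayNumber, and PublisherName fields of the standard Windows event data source in the generated MP XML. A hedged sketch of what that data source could look like for the 1201 Rule (the ID is illustrative):

    ```xml
    <!-- Sketch only: the wizard fields map onto this standard event data source -->
    <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.EventProvider">
      <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
      <LogName>Operations Manager</LogName>  <!-- the Channel from the event's Details tab -->
      <Expression>
        <And>
          <Expression>
            <SimpleExpression>
              <ValueExpression><XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery></ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression><Value Type="UnsignedInteger">1201</Value></ValueExpression>
            </SimpleExpression>
          </Expression>
          <Expression>
            <SimpleExpression>
              <!-- PublisherName is the Provider Name from the event's Details tab -->
              <ValueExpression><XPathQuery Type="String">PublisherName</XPathQuery></ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression><Value Type="String">HealthService</Value></ValueExpression>
            </SimpleExpression>
          </Expression>
        </And>
      </Expression>
    </DataSource>
    ```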

  3. In the next screen the proper Class (MP Author refers to it as Target), SAP.Logon.Server.Class, should be selected by default. If it isn’t, correct it:
    > Next

  4. MP Author is a great tool but has some disadvantages, like generating unwieldy names. This is on purpose, since the dotted names have to be unique; otherwise the MP will be flawed. Nonetheless, it’s Best Practice to make those names smarter while still keeping them unique. In this case I’ve modified the entries to this:
    - Name: SAP.Logon.Server.EventID.1201.Rule (please mind the DOTs (.))
    - Display Name: SAP Logon Server Tokens Unloaded (please mind the SPACES)
    - Description: Alerts when the SAP Logon Server has unloaded tokens
    > Next
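    As a side note: the dotted Name becomes the element ID in the MP XML, while the Display Name and Description live in the language pack section. Roughly, the result looks like this (a sketch, assuming the default ENU language pack):

    ```xml
    <!-- Sketch: Display Name and Description are stored separately from the element ID -->
    <LanguagePack ID="ENU" IsDefault="true">
      <DisplayStrings>
        <DisplayString ElementID="SAP.Logon.Server.EventID.1201.Rule">
          <Name>SAP Logon Server Tokens Unloaded</Name>
          <Description>Alerts when the SAP Logon Server has unloaded tokens</Description>
        </DisplayString>
      </DisplayStrings>
    </LanguagePack>
    ```

    That’s why the ID needs the dots and uniqueness, while the Display Name can be a friendly string with spaces.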

  5. In the Specify alert for event rule screen you can tweak many things. Choices will be based on the requirement of your organization. As such, the settings I choose here are only an example, nothing more.
    - Alert Name: Used in the MP only, not shown in the Console. When it’s reasonable, leave it;
    - Alert Display Name: Used in the Console, so make it smart. In this case: SAP Logon Server has unloaded tokens.
    - Priority: High, Severity: Error;
    - Description: You can leave the default which will dump the event description in the Alert. In this case however I’ve chosen a default text without any parameters. The buttons on the right of the screen (Data, Target, Host and Group) can be used to add parameters to the description, making the Alert much more worthwhile to read. It’s up to you, simply experiment!
    When using Rules, there is always a chance of an Alert Storm, meaning way too many Alerts come in for a single situation. Which is really bad. However, Alert suppression enables you to prevent that from happening. So when authoring Rules based on events, always check how many events are created in a single time slot, like 10 minutes, an hour, half a day and a whole day.

    Based on that information you can add Alert suppression to the Rule you’re authoring, thus preventing potential Alert Storms. However, Alert suppression can also be unwanted when people want a single Alert every time the event takes place.

    Nonetheless, hundreds of Alerts about the same issue within a day is way too much. So it’s better to apply Alert Suppression and use the Repeat Count column in the Alert View in the SCOM Console. When Alert Suppression is used, no new Alert is created while the same Alert is already open; instead, the Repeat Count counter is raised by one.

    That way you’ve got the best of both worlds: no Alert Storm, and yet a perfect way to show how many times the situation that triggered the Alert took place. Just remember, the Repeat Count counter starts at zero (0), so a Repeat Count of 0 means the Alert took place one time.

    Simply hit the Alert Suppression button and ‘play’ with the various options available. The options are self-explanatory.
    <\End of Trick>

    In this posting I don’t use Alert Suppression, so I skip it. > Next
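    For reference, alert suppression lands in the GenerateAlert write action of the Rule as one or more SuppressionValue elements; events that produce identical suppression values are folded into the already open Alert, raising only its Repeat Count. A hedged sketch of what that could look like (the suppression fields chosen here are illustrative):

    ```xml
    <!-- Sketch only: suppression fields are an illustrative choice -->
    <WriteAction ID="Alert" TypeID="Health!System.Health.GenerateAlert">
      <Priority>2</Priority>  <!-- 2 = High -->
      <Severity>2</Severity>  <!-- 2 = Critical/Error -->
      <AlertMessageId>$MPElement[Name="SAP.Logon.Server.EventID.1201.Rule.AlertMessage"]$</AlertMessageId>
      <Suppression>
        <!-- Events with the same values here raise Repeat Count instead of creating a new Alert -->
        <SuppressionValue>$Data/EventDisplayNumber$</SuppressionValue>
        <SuppressionValue>$Data/LoggingComputer$</SuppressionValue>
      </Suppression>
    </WriteAction>
    ```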

  6. Interesting! By default any monitoring workload in SCOM runs 24/7. Sure, you can change that, but natively it’s quite a challenge. With MP Author, however, it’s very easy to accomplish! Awesome! For this posting I don’t use any schedule, so the Rule runs 24/7.
    Again, it’s up to the requirements of your organization what to choose. When not using the default (24/7), make sure to DOCUMENT it and COMMUNICATE it with the organization. Otherwise your next job could be flipping burgers…

    > Next > Finish > Save the MP.

  7. Now you’re going to author the Rule to monitor for Event ID 1210 in the Operations Manager event log, using Steps 01 to 06 of Section 03.

    Use this information when authoring the Rule:
    - Log Name: Operations Manager
    - Event ID: 1210
    - Event Source: HealthService

    - Name: SAP.Logon.Server.EventID.1210.Rule
    - Display Name: SAP Logon Server Bad Response Received
    - Description: Alerts when the SAP Logon Server has received bad responses

    - Alert Display Name: SAP Logon Server has received bad responses
    - Description: The SAP Logon Server has received bad responses! Check the Admin Console for more details.

  8. At the end of the Wizard you’ll be shown a summary screen of your settings and input.

Save the MP and close MP Author. Now the MP has the two Rules and the Monitor. Nice! Time to test it!

You can download the authored SAP Logon Server MP from here. Download it, open it in MP Author and check it out. Please know, however, that the Wizards in MP Author are a one-way street: there is a Wizard for creating anything (like a Rule, Monitor, Class (Target), Discovery and so on), but once the element is created, that Wizard isn’t available anymore for editing it.

Nonetheless, you can compare the code with your own code. Enjoy!

The next posting in this series
In the next posting of this series I’ll describe how to use the MP you just made by importing it, enabling the Discovery through an Override aimed at the Group contained in the same MP. See you all next time!