Category: SCSM

When Configuration Manager Goes Bad! And How Cireson Can Help.

Let me start by saying I Love Configuration Manager!

For those of you that don’t know, System Center Configuration Manager is now 25 years old. Brad Anderson recently blogged about it and even celebrated this milestone at Microsoft Ignite.

Personally, my love affair with ConfigMgr started when it was still SMS. (No, not a text message service, but Systems Management Server.) Over the years, as the product has grown more and more powerful, my infatuation with it has continued to increase, and it is now an awesome tool that I cannot imagine doing without.

Before SMS or ConfigMgr, admins had to visit each machine for updates or software installs, we had no clue what was installed on which machine, and don’t even get me started on patch management.

Throughout the years, more and more functionality has been added to the product to make it more efficient and to solve admin issues time and again, including software deployment, patch management, Operating System Deployment, baseline configuration, inventory reporting, software metering and even antivirus!

However, there is one big issue with all this newfound power… As someone famous once said:

With great power comes great responsibility!

With the power to deploy a single patch to many machines with just one click comes the potential disaster of sending the wrong patch to the wrong machines (or worse, the wrong Task Sequence).

Anyone that has been a ConfigMgr admin for any length of time has war stories of when the wrong advertisement was sent to the wrong collection and business was impacted in some way. Many of these stories are small slowdowns or minor interruptions in service, but some are more like “resume generating events”.

A very public example of this occurred in late July back in 2012. The Commonwealth Bank of Australia (The second largest bank in Australia) was effectively taken “Offline” and unable to open the doors of the majority of their 1,000 branches for trading due to a “Systems Outage”.

The official line from the bank at the time was “a problem with an internal software upgrade”. However, it was reported that “… 9,000 desktop PCs, hundreds of mid-range Windows servers (sources said as high as 490) and even iPads had been rendered unusable….”

Unofficially, a simple mistake by a ConfigMgr admin advertising an OSD Task Sequence to the “All Systems” collection saw teller machines, AD servers and who knows what else reboot and format their hard drives in preparation for the installation of a new OS.

While there are no official numbers on the business cost to the bank or the cost of restoring the systems, I think we should all ask ourselves, “What would this type of impact cost your company?”

I don’t want to harp on this individual incident and break down the exact DNA of the outage, others have done this in the past. What I do want to do is talk about how we can make sure this does not happen to us, or at least minimise the potential risk.

How Can We Prevent ConfigMgr Disasters?

The biggest risk we have with ConfigMgr is the lack of control or granularity of security around deployments and limitations on what collections can be advertised to.

By default, all admins can send any package to any collection. Role Based Access Control (RBAC) within ConfigMgr does allow for some configuration of administration; however, it is not simple or straightforward to implement and has many limitations.

When an administrator deploys an OS Deployment task sequence to a collection with hundreds or thousands of  clients, ConfigMgr warns the admin that the action is a “High Risk” deployment and asks them to confirm the action. However, if the same admin sends patches or software updates to the same collection, no warning is given.

  • What if we could put warnings on ANY deployment type when sent to a collection containing large numbers of computers?
  • What if RBAC was more powerful and easier to use?
  • What if we could keep non-critical personnel out of the ConfigMgr console?
  • What if you could even add a bunch of support tools directly in to a single pane of glass?

Well that’s exactly what the Cireson True Control Center (TCC) does! 🙂

True Control Center is Cireson’s latest version of its Configuration Manager platform and allows organisations to control who sees and does what within ConfigMgr, all while making it super easy for them to come up to speed and learn, so they can be more productive faster.

So let’s take a look at each of the key points that ConfigMgr admins and Support Desk managers would be interested in:

Simple and Powerful RBAC

Using super simple RBAC rules it is possible to lock down what computers or users are visible to groups of users. This gives Config Manager admins the ability to limit what users can see and therefore the damage that can be inflicted if someone makes a mistake.

It also allows them to limit the number of applications that can be advertised and the number of computers that can be advertised to at one time. This removes the potential for an analyst to accidentally rebuild all your domain controllers to Windows 7. 🙂

Remote Manage Support Tools for Computers

True Control Center now introduces Remote Manage support tools that give analysts a wide range of simple tools for providing targeted support to customers and computers, all from within the browser.

Right-clicking a computer and selecting Remote Manage provides a vast list of support tools, including:

  • Basic hardware information, including CPU, RAM, OS, Make and Model.
  • Process list and control. You can see and kill processes on the remote machine.
  • Services list and control. You can see and stop, start or restart services on the remote machine.
  • Client Actions and Logs. Support actions that allow analysts to trigger common support tools for client computers. Such as:
    • Remote Control
    • Client re-install
    • WMI repair
    • Remote PowerShell
    • and much more…..


Remote Manage Support Tools for Users

Quite often with Configuration Manager, the users in an environment are forgotten about. However, all the users in an AD domain are listed in Configuration Manager and are kept up to date. Wouldn’t it be great to introduce user tools to allow support actions such as Password Reset, Account Unlock and Software Deployment?

Well now you can!
All from the one tool!


Audit Trail

A common security issue faced by organisations is how to audit who, internally, invoked specific actions. The most common example is resetting a user’s password. To allow support staff to reset passwords, an organisation will usually grant reset rights via AD security and then give the support staff access to AD Users and Computers. Each of those users can then reset anyone’s password, gain access to their account, and there is no audit trail to show who did what, when.

By using True Control Center to reset or unlock user accounts, there is a single service account that performs the action, and every time an account is unlocked or has its password reset, the event is logged against the specific user account that triggered it.

Simple and Intuitive User Interface

The System Center products, while powerful, are complicated to administer through a complex console interface. Many of the workspaces and navigation nodes are not required by most staff and just add complexity and time to learning the solution.

True Control Center reduces complexity and removes the excess navigation menus that an average support representative would not require. This shortens the time to benefit for analysts who are new to the tool, allowing them to be effective faster, with less confusion and a gentler learning curve.

Support Tool Integration

The nirvana of support tools for analysts is a “Single Pane Of Glass” that they can use to log calls, track and update calls, investigate and resolve calls and also report from.

In all my 20+ years of experience with ITSM tools, I can honestly say, I’ve NEVER seen an ITSM solution that even comes close to this goal……   until now.

With the recent release of v4.8.x of Cireson’s Analyst Portal for System Center Service Manager, analysts now have access to all the regular ITSM goodness that the Analyst Portal provides, plus access to the Remote Manage tools of True Control Center directly from any associated Computer CI!

  • No changing apps.
  • No need for multiple screens.
  • No need for copy and paste of machine names between apps.
  • All while being secure and audited.


But I don’t use System Center Service Manager, I hear you cry. (Why not? I ask…)
Don’t despair! True Control Center has a flexible API that you can use to create a custom integrated solution in your ITSM tool of choice!
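As a purely illustrative sketch (the endpoint paths, parameter names and authentication flow below are hypothetical placeholders, not the documented TCC API), an integration from another ITSM tool might look something like this:

# Hypothetical sketch only - consult the True Control Center API documentation
# for the real endpoint paths, parameter names and authentication scheme.
$tccServer = "https://tcc.contoso.com"   # assumed TCC server URL
$computer  = "PC001234"                  # machine name passed in from your ITSM tool

# Authenticate (illustrative endpoint) and trigger a support action against the computer
$token = Invoke-RestMethod -Uri "$tccServer/api/token" -Method Post `
    -Body @{ username = "svc-itsm"; password = "<secret>" }
Invoke-RestMethod -Uri "$tccServer/api/computers/$computer/actions/clientrepair" `
    -Method Post -Headers @{ Authorization = "Bearer $($token.access_token)" }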

No Console App Required

Traditionally, using the Configuration Manager console requires an analyst to install the console on their computer to administer or use the tool’s functionality. This ties the analyst to a specific workstation that they must return to, or remotely access, to achieve even the most basic tasks.

True Control Center is a web based application and can therefore be accessed from anywhere, including mobile devices and even outside the organisation. Analysts can trigger the required actions from any browser without the delay and effort of returning to, or remotely accessing, their primary workstation.


True Control Center is an amazing tool that any organisation running Configuration Manager should review. It quickly and easily delivers real world benefits to any analyst responsible for the configuration and health of end users and computers.

Reducing time-to-resolution is a constant goal for support organisations, and the Cireson True Control Center delivers the tools to drive down the time and effort required to achieve the most common tasks, all while ensuring security and the ability to audit activity.

Do your support team a favour and get an onsite trial organised today or even try it out in the online demo environment with no need to install a thing.


Custom Open Source Exchange Connector for SCSM

Since 2015, pretty much all of us who use System Center Service Manager (SCSM) have used the Microsoft Exchange Connector v3.1 to capture e-mails coming from end users and turn them into Incidents. It works well and does what it says on the box… But wouldn’t it be great if it did some other things?

What if it could merge replies to prevent multiple work items from being created, or work with encrypted e-mail systems,  or even use AI to predict the subject and auto search KB articles for the end user?

All this and be open source so we could customise it ourselves?

That would be special…

Well that’s exactly what one member of the Cireson Community did!

Adam Dzyacky took on this challenge and has now created an Open Source, Community driven, PowerShell coded Exchange Connector that not only preserves the functionality of the Microsoft Exchange Connector but adds additional functionality.

Recently I was lucky enough to sit down with the connector’s creator, Adam Dzyacky, and ask him a bunch of questions about the product. I thought I’d write a blog post to share some of those questions and answers, including the genesis of this product, its goals, its current abilities and how people can use it today… FOR FREE! 🙂

Question: Where did the idea of this connector come from and what was your thought process behind creating this connector?

Answer: Several years ago when I first got involved with Service Manager and the Exchange Connector I was immediately confronted with a problem – the stock connector only processes a single message type (IPM.Note). As such, any other message type is simply ignored. Out of Office, Meeting requests, Signed/Encrypted messages…all of it.


But hope was not lost, because with some PowerShell and SMA one could create scheduled SMA jobs to pick up what the stock connector missed. It would certainly introduce a new level of administration, but once it’s automated the work is done. I thought to myself:

“Well at least I can curb this with PowerShell so I guess it isn’t that bad.”

But I couldn’t help but shake the feeling that I can’t be the only one who cares about those other message types.

Next, if it wasn’t some new message type I’d have to deal with, it was how the connector worked when it came time to process even basic emails. Employees replying to the same message thread within a single processing loop of the connector would generate new and unique Work Items for every single reply, instead of simply appending to the Action Log of a single Work Item.

Since the connector isn’t real time and instead runs every X minutes… well, a lot can happen between runs of the connector! It’s an unpredictable behaviour that requires the team(s) charged with that initial filtering to do a lot of Work Item micro-management, detracting from their actual work of resolving Incidents and fulfilling Service Requests. That’s potentially a lot of duplicate Work Items to close in SCSM, and no fewer to learn to ignore in reporting.

In this case, supplementary PowerShell and SMA jobs can’t solve the problem, because the Work Items have already been created. The connector would need to be able to understand the concept of an email thread at the source, before Work Items are created or updated.

The above are but the first of many issues I had with the stock connector. It’s not that it isn’t great at what it does, it’s just I wished I could change some of it.

But no matter how much I wished I could change it, the Exchange Connector is a sealed, closed source, C# management pack. Even if you could address this at its source, not only would you need an understanding of the C# programming language, but you’d also need an in-depth understanding of the System Center SDKs.

Question: So what was your plan of attack to fix these issues?

Answer: In February of 2017 I finally had enough of what wasn’t possible and committed my requirements to OneNote.

  • Preserve all functionality of the stock connector
  • Introduce some kind of new functionality over the stock connector
  • Be modular to support new/changing processes
  • Be open source
  • No programming languages – it needed to be something that more than just developers could understand and ultimately edit

Question: No programming languages? As an admin I love the thought of that. So what was the plan of attack?

Answer: So from here, the decision was straightforward. Build an Exchange Connector written entirely in PowerShell leveraging the widely used community PowerShell module that is SMLets.

On top of that, host on GitHub so that bugs can be tracked, features requested, and anyone can contribute.

If successful, you’d be able to drop the stock Exchange Connector, improve performance on your workflow server (especially if you had multiple connectors for multiple inboxes), optionally move the script into an SMA or Azure Automation runbook, and of course introduce a host of new possibilities, as the only limitation to new features would be PowerShell.

As per Tom Hendricks’ comment in the Cireson Community thread:

“Limitation and PowerShell do not often appear in the same sentence.”
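To give a feel for the approach (this is a minimal sketch using SMLets, not the connector’s actual code; the property values are illustrative), creating an Incident from an already-parsed email could look something like this:

# Minimal sketch only - the real connector also handles templates, keywords,
# threading, attachments and much more. Assumes $mail holds a message already
# read from Exchange (e.g. via the EWS Managed API).
Import-Module SMLets

$irClass = Get-SCSMClass -Name "System.WorkItem.Incident$"
New-SCSMObject -Class $irClass -PropertyHashtable @{
    Id          = "IR{0}"            # let SCSM assign the next IR number
    Title       = $mail.Subject
    Description = $mail.Body
    Status      = "Active"
}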

Question: How long did it take you to write the initial version?

Answer: In what probably totals about three weeks of actual focused work – I had the first version done.

Question: Being Open Source means that anyone can contribute to it, but allowing people to contribute and finding people to contribute are two different things. Have you been able to garner support from others to help develop this solution?

Answer: Starting April 2017 I shared this with Tom Hendricks, Brian Wiest, Martin Blomgren, and Leigh Kilday who were gracious enough to provide their time to test and provide feedback for the first release published on GitHub later that month.

Question: So what exactly does it do? What are its features?

Answer: The connector has all of the regular features of the stock Exchange Connector, plus new features that fall into two categories:

  • People who are using SCSM by itself
  • People who are using SCSM with Cireson products

Features if you’re just using SCSM

More keywords

  • Change Requests
    • [hold]
    • [cancel]
    • [take]
  • Incident
    • [take]
    • [reactivate]
  • Problem
    • [take]
  • Service Request
    • [take]
    • [hold]
    • [acknowledge]
  • Manual Activity
    • [skipped]


Announcements

Just throw [announcement] in your next email to Service Manager and, as long as you’re part of the configurable AD group that’s defined, an Announcement will get created in SCSM. Need to control the priority? Just add an additional #low or #high; announcements default to normal priority otherwise. And yes, you can update announcements simply by keeping the [Work Item] in the subject.

Minimum File Attachment Size

No more signature graphics as attachments. Set a minimum like 25kb and your Work Items will get a whole lot cleaner.

Maximum File Attachment Size

Optionally enforce File Attachment Settings as defined in the Administration -> Settings pane of each Work Item type.

File Attachment “Attached By” Relationship

When the connector processes messages, the Sender will be marked as the “Attached By” relationship for attachments. This is useful when multiple parties are updating the same Work Item.

Review Activities without [approved] or [rejected]

Do your end users think someone is actually reading the Service Manager inbox, so they respond to RAs with questions? Fret not, because now comments that don’t contain a vote will get appended to the Action Log of the highest parent Work Item.

Vote on Behalf of AD Groups

Open up a whole new world of voting possibility!

Schedule Work Items

The Scheduled Start/End times of a Work Item can now be set by sending a Meeting request to Service Manager. No Work Item yet? Just like email, if a Work Item doesn’t exist to update, a new one will be created, only now those date fields will be set as part of the Work Item’s creation.

Digitally Signed/Encrypted Messages

Leveraging the open source MimeKit project the connector can process digitally signed or encrypted emails just like regular mail.

SCOM Integration

Get the health of your [Distributed Apps] and their current Active Alerts.

#private replies

Want to keep notes between analysts private? Just throw a #private in your message to SCSM and it’ll get marked as Private on the Action Log.

Merge Replies

No more duplicate Work Items because now when users Reply to an email that does not have a [Work Item] in the subject, Service Manager will identify the email thread they were in and update the one, true, correct Work Item.

Create Related Work Items on Closed Work Items

Sometimes employees send an email about a Closed Incident. Rather than turn a blind eye, a new related Work Item will get opened for them, copying information from the previous Work Item into the new one along with their recent comment.

Multiple Inboxes

Configured correctly, you can redirect several inboxes on Exchange to your single Service Manager inbox. On top of this, unique templates can and will still be applied based on the source inbox they were redirected from. Buh bye multiple connectors!

More Default Work Item Types

No reason to limit yourself. The connector can now be configured to create Change Requests or Problems by default. Great for vendors sending maintenance notifications or analysts generating Problems.

Artificial Intelligence

Did you battle with the classic Exchange Connector dilemma of “What should the default work item type be when people send in emails – Incident or Service Request?” Wouldn’t it be great if Service Manager could just decide whether it should create an IR or SR based on the Affected User’s perceived attitude? Thanks to Azure Cognitive Services, emails can now be run through Sentiment Analysis and, based on the rating, the connector will dynamically create either a Service Request or an Incident using a minimum score configured per organisation.
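For context, a sentiment call of this kind looks roughly like the sketch below (the region, API version and the 0.45 threshold are illustrative assumptions, not the connector’s actual configuration):

# Rough sketch of scoring an email body with Azure Cognitive Services Text Analytics.
# The region, API version and 0.45 threshold here are illustrative assumptions.
$key  = "<your Cognitive Services key>"
$uri  = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
$body = @{ documents = @(@{ language = "en"; id = "1"; text = $mail.Body }) } | ConvertTo-Json -Depth 3

$result = Invoke-RestMethod -Uri $uri -Method Post -Body $body `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } -ContentType "application/json"

# Scores run from 0 (negative) to 1 (positive); below the configured minimum, create an Incident
if ($result.documents[0].score -lt 0.45) { "Create an Incident" } else { "Create a Service Request" }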

Features If You Are a Cireson Customer

Suggest Knowledge Articles

You can optionally enable the connector to use the body of the email as a search query against your respective Cireson HTML Knowledge Base. Once complete, the connector will send an HTML email back to the Affected User with suggested Knowledge Articles and hyperlinks to them.

Suggest Request Offerings

You can optionally enable the connector to use the body of the email as a search query against your respective Cireson Service Catalog. Once complete, the connector will send an HTML email back to the Affected User with suggested Request Offerings and hyperlinks to them.

Send Outlook Meeting

The connector supports the ability to create or update Work Items from Meeting Requests. This introduces a New Work Item task on the Cireson portal so you can further leverage this feature.


Announcements

Just throw [announcement] in your next email to Service Manager and, as long as you’re part of the configurable AD group that’s defined, an Announcement will get created in the Cireson SCSM Portal. Who will see it? Simple – the distribution groups you included on your outgoing email message! Need to control the priority? Just add an additional #low or #high; announcements default to normal priority otherwise. And yes, you can update announcements simply by keeping the [Work Item] in the subject.

[take] Keyword Restrictions for Support Groups

Maybe you want to put some restrictions on who can [take] things. Leveraging the Cireson Web API, this is now possible by checking whether the Sender is part of the Support Group the Work Item is currently assigned to.

Artificial Intelligence

Instead of using the entire email body to suggest Knowledge Articles or Request Offerings to the Affected User, Azure Cognitive Services will pick out the keywords of the message and use those words to drive suggestions. This results in more focused searches and faster processing times.

Question: WOW! That’s a lot. What’s next on the planning table and how can others join in the conversation?

Answer: A few that come to mind are things like creating Work Items on behalf of others through the connector, assigning to yourself on Create and, as the GitHub community suggested, integrating with the Cireson Portal Watchlist feature. All of these can be found on the repo’s Issues page.

Speaking just for myself, I’d say that since day one I’ve wanted some kind of AI integration, and fortunately Azure Cognitive Services readily provides that through easily consumable APIs. While we have sentiment and keyword analysis in the current version, I think the more interesting topics are things like using their Speech API to convert voicemails to Work Item descriptions, or using LUIS to understand intent and drive specific actions within SCSM. But ultimately, it’s just discussion at this point.

Question: How would someone get involved in contributing to the project if they wanted to?

Answer: All it takes is a GitHub account. After you sign up you can fork the repository. This, in short, creates a duplicate SMLets Exchange Connector under your own account that you can edit and change as you see fit, and then submit requests to merge back into the master repository if you want. Cireson Community member Roland Kind has done this to start building a version that makes use of the stock SCSM cmdlets, if you prefer that module instead.

An account also gets you the ability to suggest features, post bugs, and join the conversation directly on the Issues page. Maybe you just want to be notified when there are changes? If you put a Watch on the repo you can get email notifications when changes occur. Or if you just want to show your support you can also Star the repository.


The new PowerShell based Open Source Exchange Connector is nothing short of AMAZING!

Thanks go to Adam Dzyacky and everyone else who has contributed to this solution for all the hard work and dedication to get it up and running.

New features get added regularly and there is a vibrant and energetic group of contributors who keep it updated and supported. (Not sure I could say the same about the Microsoft Exchange Connector offering – last updated in 2015.)

While some organisations may have issues with this solution being Open Source and not officially supported by a vendor, I personally think the benefits far outweigh the possible risks. Considering the time and effort we all spend micro-managing the results of the out-of-the-box connector, this new solution could shave tens of hours per week off support effort.

Is Service Manager Dead? NO says Microsoft.

While working with customers to better map out their use of the Microsoft products that they are licensed for, the conversation always drifts to System Center Service Manager and Orchestrator because they are the two products I like talking about most. 🙂

One of the most common questions I get asked is “What’s the future of Service Manager and Orchestrator?”

This was always a hard question to answer, because Microsoft have been rather tight-lipped about the products and what their futures are… until now!

In a recent blog post, Chris Howie wrote about the SCSM Roadmap and future and mapped out exactly what is on the cards for the two beloved products.

In short, SCSM and Orchestrator (along with Data Protection Manager, Virtual Machine Manager and Operations Manager) will be moving to the same “Semi-Annual” release cycle that System Center Configuration Manager moved to more than 2 years ago.

Chris Howie put it perfectly:

Why is this important? By releasing these products more frequently, the rest of System Center can now leverage the development agility that Configuration Manager has – meaning additional features and fixes released more frequently. On the flip side of that, this means the roadmap fundamentally changes as well. If features and fixes are being released semi-annually, it makes sense that the next set of features have about the same visibility. This means that the days of 3 year roadmaps for any System Center product are gone.

What does this mean for you? System Center Service Manager and Orchestrator are still being developed and are part of this new release cycle along with the rest of System Center. Some semi-annual updates will only have fixes and some will have additional functionality. The features that get added to the entire suite each cycle will depend on customer demand and will be prioritized as such.  The products which receive enhancements will likely vary each time. All products are therefore still fully supported.

What you may have also missed is another post on the Microsoft Hybrid Cloud blog back on June 15th, 2017. The Microsoft Windows Server team wrote about this faster release cadence in general terms, but one cool item buried in the post was the fact that:

We also recently announced the ability to send incident data to Service Manager from Azure.

Now that’s cool.

The one thing that we can do as fans of System Center is participate in the System Center Tech Community and UserVoice forums to provide feedback to the product teams and help influence what is released in upcoming versions.

Please keep it coming Microsoft.

SCSM SLOs 101

I’m frequently asked about SLOs when I do consulting work, and I’ve realised that many people may not fully understand how SLOs work and the key pieces that have to be in place, not only to get them to work as we expect, but to do it efficiently so they do not adversely impact performance in our SCSM environment.

What is an SLO?

Within ITIL, an SLA is a contract or agreement negotiated between you as a service provider and your customer(s). The SLA describes the service and specifies the responsibilities that you will deliver to the customer; an SLO (Service Level Objective) is a specific, measurable target within that agreement. You might use a single SLA across several services or even customers, depending on your business model.

A simple example of an SLO might be that we agree to resolve a Priority 1 rated incident within 4 hours.

A more complicated example might be that we agree to provide 99.99% uptime for a service.

What Components Make Up an SLO within SCSM?

To create an SLO within SCSM we need four components:

  1. A metric to measure
  2. A Queue to apply it to
  3. A calendar that defines our “Work Hours”
  4. A time set against the metric

Creating a Metric in SCSM

A metric, within SCSM, is defined by any two date properties that can have a time difference between them.

For example: The Creation time and Resolution time of an Incident or Service Request.

The Metric is used as the point of measure for the workflow to use when displaying or reacting to a warning or breach event.

Out of all the SLOs I’ve seen, the two most common are IR First Contact and IR Resolution.

Creating a Queue in SCSM

Not all SLOs apply to all Work Items.

To limit which SLOs apply to which Work Items, we need to group together the Work Items that we want to apply the SLO to.

Creating a Queue is a way of grouping together a given type of Work Item based on criteria that you choose.

Common examples used for Queues are:

  • Priority based queues (P1, P2, P3 etc.)
  • Category based queues (Server, Desktop, Network etc.)

The most critical thing to watch when creating Queues is to ensure you select a class that has the minimum number of relationships you require to achieve your goal. Selecting the “Incident (Advanced)” combination class for all Incident-based Queues is the leading cause of SCSM slowdowns that I have seen.

Creating a Calendar in SCSM

The calendar is used to ensure that the SLO is only calculated when support staff are at work, and not over weekends or overnight (assuming you don’t work in a 24×7 organization).

You can have multiple calendars if you have different support groups working different hours, but for most organizations there is a single support schedule that the entire team works to.

Creating an SLO in SCSM

To create an SLO you must have all of the prerequisites created and available.

The SLO is then just a case of selecting the time to set against the metric type and applying it to a given queue.

Within the SLO creation wizard you will be asked for both a warning time and breach time.

The warning time triggers an event at a given time before the SLO breaches, allowing you to have an e-mail sent to the relevant parties to give them fair warning that the Work Item needs to be worked on.

The breach time triggers an event at the time of the breach and can be used to notify management or an escalation team if required.
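Pulling the four components together, a typical example (using the Priority 1 target mentioned earlier) might be:

  • Metric: Incident Created date to Resolved date
  • Queue: all Priority 1 Incidents
  • Calendar: Monday to Friday, 08:00 to 18:00
  • Target: resolution within 4 business hours, with a warning event at 3 hours and a breach event at 4 hours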

How (and How Not) to Use SLOs in Day-to-Day Operations

In this author’s opinion, for MOST organizations, SLOs are not required and provide nothing more than a false sense of security in reports and a great source of anxiety for support staff.

I only advise customers to implement SLOs if they have strict, contractually binding service levels that they must achieve under penalty of contract breach or financial fine.

If your organization wishes to use SLOs purely as a reporting measure, then I suggest you use advanced reporting features to tease this information out of the data after the fact, rather than placing the stress of the SLO clock on the support staff.

In a future post I will also offer an opinion on why I believe SLOs for most organizations are terrible and should be killed with fire… But that’s another post 😉

Hidden SCSM Console Shortcuts

After working with SCSM for 6 years now, I thought that there were pretty much no surprises left for me within this product that, let’s face it, gets new features about as often as politicians do something right.

So it was with much celebration and rejoicing that I was informed of a hidden trick within the SCSM Console that we all love to hate.

A good friend and fellow SCSM tragic, Shayne Ray, contacted me today to share what he found.

While doing some work jumping between the SCSM Console and the Cireson Service Manager portal, Shayne hit Ctrl+F5 to refresh the browser; however, the focus at the time was on the SCSM console, and he found something remarkable. A quick search around the interwebs finds a few mentions of it from others, but nothing official from Microsoft, so I thought I’d do a quick write-up of it all.

While in the console, in any location, if the analyst hits any of the following key combinations, the corresponding actions are invoked:

  • Ctrl+F1 – Opens a new default Incident form
  • Ctrl+F2 – Opens a new Incident from a template
  • Ctrl+F3 – Opens a new Request Offering from a template
  • Ctrl+F4 – Opens a new Service Request from a template
  • Ctrl+F5 – Opens a new Change Request from a template
  • Ctrl+T – Hides or shows Tasks pane
  • Ctrl+F – Opens the Advanced Search window
  • Ctrl+D – Hides or Shows the Details Pane
  • Ctrl+1 – Selects the Administration Workspace
  • Ctrl+2 – Selects the Library Workspace
  • Ctrl+3 – Selects the Work Items Workspace
  • Ctrl+4 – Selects the Configuration Items Workspace
  • Ctrl+5 – Selects the Data Warehouse Workspace
  • Ctrl+6 – Selects the Reporting Workspace
  • Alt+F1 – Hides or Shows the Navigation pane

You learn something new every day! 🙂

Cireson Software Asset Management – Tracking Operating Systems

The question of tracking Operating Systems within the Cireson Asset Management solution came up the other day, and I thought I’d put together a quick blog post to cover off why we would do this and, more importantly, how.

Why Track OS Versions in Asset Management?

First off, I think it is important to ask yourself why you would want to track Operating Systems within your organisation, as it might not give you any metrics or data that are actually useful to you.

For example: if your organisation has an Enterprise Agreement with Microsoft that covers Windows for all of your PCs, then why do we need to report on it? If we know for sure that we are covered regardless of which version of the OS is used, then there are no useful licensing reports to be gained about OSs.

However, we could get some reports about how our upgrades are going, or, if a particular threat targets a specific OS, we could quickly report on what our exposure would be.

So the first thing that you really need to do is determine whether it is worth tracking Operating Systems before investing time and effort into setting this up.

How to Track OS Versions in Asset Management

If we have decided to track OS versions, then we need to make sure we cover all the OSs we care about by creating a Software Asset for each branch that we want to track.

For example: if you want to track just major versions (Windows 7, 8, 10), then it is possible to create a Software Asset for each of these without needing to go to any lower level.

However, if you are trying to ensure workstations are up-to-date, then you will have to create a Software Asset for each SKU of the Windows OS (e.g. Windows 10 Home, Windows 10 Enterprise).

Once all individual OSs are tracked, I would also suggest creating two Software Assets called “All Windows Desktop OSs” and “All Windows Server OSs”. These will have bundle rules for all of the OSs, so you can track licensing if you have a limited number of OS licenses.

Below is a list of OSs that could be tracked, but it would be up to the individual as to which ones to use.

Server OSs

Microsoft Windows Server 2003 Enterprise Edition R2
Microsoft Windows Server 2003 Standard Edition
Microsoft Windows Server 2003 Standard Edition R2
Microsoft Windows Server 2003 Web Edition
Microsoft Windows Server 2008 Enterprise
Microsoft Windows Server 2008 R2 Enterprise
Microsoft Windows Server 2008 R2 Standard
Microsoft Windows Server 2008 Standard
Microsoft Windows Server 2012 Datacenter
Microsoft Windows Server 2012 R2 Datacenter
Microsoft Windows Server 2012 R2 Standard
Microsoft Windows Server 2012 Standard
Windows Server 2016 Datacenter
Windows Server 2016 Standard

Desktop OSs

Microsoft Windows 10 Enterprise
Microsoft Windows 10 Pro
Microsoft Windows 7 Enterprise
Microsoft Windows 7 Professional
Microsoft Windows 7 Ultimate
Windows 7 Enterprise
Windows 7 Professional
Windows 7 Ultimate
Microsoft Windows 8 Enterprise
Microsoft Windows 8 Professional
Microsoft Windows 8.1 Enterprise
Microsoft Windows 8.1 Professional
Microsoft Windows Vista
Windows XP Professional

How to Enter OS Versions in Asset Management

Now all you have to do is enter these into Cireson Asset Management and we are done, right?

Not so fast.

We have a few options to play with here, including one labelled “This is an OS”. It seems fairly obvious that we would select this, right?

Not so much.

This option looks in a separate location of the ConfigMgr data instead of the Add or Remove Programs list. But the Windows OS is also recorded in the Add or Remove Programs list, and that entry often has more detail, so it is better not to use this option.

Entering Software Assets one at a time can be a challenge and takes a lot of time, so to make it easier, here is an Excel file filled with all the information you need to make this happen by importing via Cireson Asset Import or Cireson Asset Excel.


Happy reporting.

How to use the Cireson Asset Import Connector

A little while ago on the Cireson Community Forum, a member asked for more details on how the Cireson Asset Import Connector works, so I decided to write a blog post to clear up exactly what the connector is and how it works. I also recorded a short video for those of you who do not like long-winded blog posts. You can find the video here.

The Cireson Asset Import Connector is one of the solutions contained within the Cireson Asset Management stream of products, and it allows Asset Administrators to take the guesswork out of importing external data into System Center Service Manager. The app allows any out-of-the-box CMDB data, or any information in the Cireson Asset Management app, to be imported from external CSV, SQL, ODBC or LDAP sources of truth, exposing an intuitive interface that provides the ability to map columns and schedule imports as required.

A little-known pub quiz fact is that the Cireson Asset Import app grew from the CSV Import app, which was the very first Cireson app to hit the market. Next time this question comes up in a pub quiz, rest easy knowing that you now have the answer and are in a pub that is so cool it asks questions like that one! 🙂

When you add the Cireson Asset Import app to a Service Manager environment, importing data becomes seamless. One-time imports and configuring XML files become a thing of the past. The straightforward app provides the organization with the ability to build an asset repository of information that is relevant and accurate when working with requests in Service Manager.

So let’s get into it… Throughout the following post, I will call out important things to note and also what is generally regarded as “Best Practice”, but always consider the requirements and impact these settings may have.

1. Creating a new Asset Import Connector

  1. Within the SCSM console, select the Administration workspace.
  2. Right click the Connectors Node.
  3. Select Create Connector from the drop down menu.
  4. Select Asset Management Import Connector from the sub menu.
NOTE:

The sub-menu option Asset Management Import Connector (Import) is for importing previously created or backed-up Import Connectors.

Enter a name for the connector that will make sense to other administrators for future maintenance tasks.

Select a Management Pack (or create a new one) that will be used to contain the workflow information required for the connector.

Cireson Best Practice:

Best practice for the creation of Management Packs is to create them via the SCSM Authoring Tool, giving each an internal and full name in the format of “<Company Name> – Asset Management Import Connectors”.

This helps identify the Management Pack when it is exported or backed up at a later date.

The next step will be different depending on the input data source. Select and use one of the following sections below before continuing.

2. Using a CSV Source

After completing the steps in section 1 above, browse to the location of the .CSV file that contains the asset data to import and select the Encoding Format of the file.

The selected path can be either a local path (on the SCSM workflow server) or a network share to which the Workflow account has read permissions.

The first line of the CSV file must contain the header row information for the data contained within.

Cireson Best Practice:

It is Cireson best practice to create a single folder that contains all the CSV import files for every connector in use. It is also best to configure connectors with a UNC path as the location of the selected file, as this allows the connector to be edited successfully from other computers.
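As an illustration, a minimal CSV for a Hardware Asset import might look like the following (the column names are examples; yours should match the properties you intend to map):

SerialNumber,AssetTag,Model,Location
5CG1234ABC,A-0001,HP EliteBook 840,Sydney
5CG1234ABD,A-0002,HP EliteBook 840,Melbourne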

 Continue the connector settings.

 3. Using a SQL Source

For Microsoft SQL Server data source:

Enter the SQL connection string by clicking the ellipsis button and entering the required connection information.

NOTE:

If Windows Authentication is to be used, the SCSM Workflow account must have read access to the source database.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.
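For example, a query against a hypothetical purchasing database (the table and column names are purely illustrative) might be:

SELECT SerialNumber, AssetTag, Model, PurchaseDate
FROM dbo.PurchasedHardware
WHERE Status = 'Deployed'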

Continue the connector settings.

4. Using an ODBC Source

For an ODBC data source:

Create a File Data Source Name (DSN) that contains the Server, Database and username for the data source.

Browse the file system and select the File DSN.

NOTE:

The SCSM Workflow account must have read access to the File DSN.

Enter the File DSN Password for the username within the File DSN.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.

Continue the connector settings.

 5. Using an LDAP Source

For an LDAP data source:

Enter the LDAP Server or Namespace and the LDAP Port (If required).

If the SCSM Workflow account does not have read access to the LDAP source, enter alternative credentials with the required rights.

Enter the LDAP Attributes that are required to be returned separated by commas.

Enter an LDAP search starting path to reduce the search scope as required.

Enter any LDAP Filter needed to refine the results to the specific required data.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The LDAP Query Result field will show the number of rows returned if the query was successful.
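As an illustration, the values for importing computer records from Active Directory might look like this (all values are examples for a fictitious domain):

LDAP Server: dc01.contoso.com
LDAP Attributes: name,operatingSystem,serialNumber,description
LDAP search starting path: OU=Workstations,DC=contoso,DC=com
LDAP Filter: (objectClass=computer)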

Continue the connector settings.

6. Connector Settings

Select the target class that the records will be imported into. This might be one of the base classes (such as Hardware Asset) or, if other relationships are required, a combination class (Type Projection) that contains the relationships needed for the import.

Enter a Workflow log path to track import results and report on success\failure.

Set the required options for the instance of the Asset Import connector. See below for more details on these options.

Once all options are selected, click Next.

Asset Import Connector Options:

  • Test Mode – The connector will run and create a log file for inspection without committing any changes to the SCSM database.
  • This connector can create new items – When enabled, the connector can create new records within the database. This is used to allow the import of new records.
  • This connector can update existing items – When enabled, the connector can update existing records that match the key fields of the selected class.
  • This connector will DELETE ALL matching items only – This option changes the behaviour from creation to deletion of records. Any record matched from the import data to an instance of the class will be removed from the SCSM database. WARNING! Deleted data cannot be recovered.
  • This connector will update multiple existing items matching specific custom keys – When enabled, the connector uses the Custom Keys defined on the Data Mapping screen to find all existing matching items and update them.
  • Do not replace \n with a linefeed – By default, the import connector will interpret any \n text as representing a new line and will therefore replace it with a linefeed character within SQL. Enable this option to leave \n as literal text.

7. Mapping Fields

Data Mappings allow the mapping of the specified input data to the properties of the selected target class within SCSM.

On the Data Mapping screen, if the option “This connector will update multiple existing items matching specific custom keys” was selected on the previous screen, the first option shown is for Custom Keys. Custom Keys are used to find all existing matching items and update them as normal via the mappings below. At least one Custom Key is required.

The Custom Key can be any of the properties for the class that was selected for this connector.

Add the custom keys as required and map these to the data from the import source.

NOTE:

All Key Properties for the selected class, as well as any Custom Keys, are required fields and must be mapped to continue.

The property displayed in the left column will show all properties of the selected class, along with any extended properties that have been added for the class.

The Data Type in the middle column will show what input data type the property will expect. String (Key) identifies the primary key for the selected class.

The Mapped To value displayed in the right column will show drop-down values for each available column header from the specified source.

The Hardware Asset ID should be mapped to the primary key selection you chose in the Asset Management Settings. (Serial Number, Asset Tag, GUID, etc.)

Map all additional properties to the input data that is defined from the Input source.

Any properties that are mapped will be updated or entered as defined.

Any properties that are not mapped will not be updated.
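As an illustration, a simple Hardware Asset mapping against the example CSV from earlier might look like this (property names will vary with your class and extensions):

  • Hardware Asset ID (String (Key)) mapped to SerialNumber
  • Asset Tag (String) mapped to AssetTag
  • Model (String) mapped to Model
  • Location (String) mapped to Location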

If a Combination Class is selected for the connector there will be additional mapping fields under the Relationship heading.

These can be used to map data from multiple classes together as relationships as required.

Once all mappings are complete, click Next.

8. Connector Workflow Schedule

Some connectors will be run as a one-off to import bulk data into the SCSM database, whereas others might be run on a schedule to keep other data sources up-to-date within the database.

An example of a scheduled data source might be a connector into a Mobile Device Management (MDM) solution, or an accounting or purchasing system (for invoices and Purchase Orders).

For connectors that will be only run once, select the option marked This connector will be run manually.

When using this option, a warning message will be displayed to remind administrators that the connector will only run when using the Synchronize Now task within the console.

For a recurring schedule, enter the frequency as either daily or as a regular recurrence with a set frequency.

Ensure the Connector Enabled option is ticked to allow the connector to run. This option may help with the administration of the connector at a later date if it needs to be turned off for a period of time for maintenance or fault finding.

When the scheduling information has been entered, click Create.

9. Manually Running a Connector

Once a connector has been created it will show within the Connectors node in the Administration workspace of the SCSM console. Within this node, administrators are able to see the current status of all connectors, when they were last started and finished, and their percentage complete.

Administrators are also able to manually run a connector to either force the synchronization regardless of workflow schedule or to trigger a non-repeating connector.

To manually run a connector:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the Connector to be run and click the Synchronize Now task within the tasks pane.
If the connector does not have a schedule set (is disabled), a message will appear informing you that the connector is disabled and asking if it should still be run.

Click Yes to run the Synchronization.

The connector workflow will then be scheduled to start at the next opportunity for the workflow engine.

10. Exporting and Importing a Connector

Once a connector has been configured, the settings can be exported to allow administrators to copy the connector to a different environment (e.g. dev to prod).

To export and import a connector:

Within the environment to export from:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the Connector to be exported and click the Export task within the tasks pane.

Save the connector XML file to a path and click Save.

Within the environment to import in to:

Right-click the Connectors node and select Create Connector from the drop down menu.

Select Asset Management Import Connector (Import) from the sub menu.

Browse to the folder containing the exported XML file, select the xml file to import and click OK.

A window will appear to rename the Connector from its original name if required and change the Management Pack that holds the information.

If the connector imports from a CSV file, an additional field will appear that is used to provide the source location of the required CSV file.

Enter the values needed and click OK.

The connector will be imported and will now appear in the connectors node.

11. Deleting a Connector

If a connector is no longer needed, then it can be removed from the SCSM environment by deleting the connector from the console.

To delete a connector:


Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the connector to be deleted and click the Delete task from the tasks pane on the right of the screen.

Click OK on the message that appears to confirm the connector to be deleted.

If the connector has previously imported data, a second message will appear asking if the data that was imported from the connector should also be deleted.


Hope this gives you a clear idea of how this app comes together and works for your organization.

Leave a comment if you have any additional questions.


Getting More From ConfigMgr For SCSM

We all love Microsoft’s System Center Configuration Manager, and the vast majority of the industry loves it too: Microsoft recently announced that over 50 million endpoints are now managed by just the latest current branch build (1610).

The number of data points returned by the ConfigMgr client is huge and can be exceptionally useful when diagnosing issues or tracking down what is deployed in an organization.

However, out of the box, the data is limited to what Microsoft deems necessary. While this is fine most of the time, every now and then there is a requirement to find more or different information to track things that are not in the standard hardware inventory report.

A great example that was asked for recently is Monitors.

Some organizations want to be able to track monitors with their PCs, and therefore their locations, etc.

What many people do not realise is that a monitor’s cable (even VGA) passes very basic information back to the PC. This can include details relevant to the monitor such as:

  • Manufacturer
  • Model
  • Serial Number
  • Etc.

This data follows an industry standard called Extended Display Identification Data (EDID). Because the data is in a consistent format, we can retrieve it in a consistent way.

Once we retrieve the data we can use it to identify what Monitor is currently plugged in. All we then have to do is get the Configuration Manager client to return the data as part of the standard hardware inventory cycle.

Step 1: Storing the EDID Data Somewhere Locally

This step takes the EDID data and places it into a location that we can easily retrieve via the ConfigMgr client.

To achieve this, we need to get the client to interrogate the monitor for the EDID information, then save the data to an easy-to-retrieve location, such as WMI on the local machine.

To do this, we use PowerShell.

Here is the code you will need. Test this script before you use it in prod; the script is provided as-is and is not supported. (The usual drill.)

# Reads the 4 bytes following $index from $array then returns them as an integer interpreted in little endian
function Get-LittleEndianInt($array, $index) {

# Create a new temporary array to reverse the endianness in
$temp = @(0) * 4
[Array]::Copy($array, $index, $temp, 0, 4)

# Then convert the byte data to an integer
[System.BitConverter]::ToInt32($temp, 0)

# Creates a new class in WMI to store our data, including fields for each of the data points that we can return
function Create-Wmi-Class() {
    $newClass = New-Object System.Management.ManagementClass("root\cimv2", [String]::Empty, $null)
    $newClass["__CLASS"] = "MonitorDetails"
    $newClass.Qualifiers.Add("Static", $true)
    $newClass.Properties.Add("DeviceID", [System.Management.CimType]::String, $false)
    $newClass.Properties["DeviceID"].Qualifiers.Add("key", $true)
    $newClass.Properties["DeviceID"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("ManufacturingYear", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["ManufacturingYear"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("ManufacturingWeek", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["ManufacturingWeek"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("DiagonalSize", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["DiagonalSize"].Qualifiers.Add("read", $true)
    $newClass.Properties["DiagonalSize"].Qualifiers.Add("Description", "Diagonal size of the monitor in inches")
    $newClass.Properties.Add("Manufacturer", [System.Management.CimType]::String, $false)
    $newClass.Properties["Manufacturer"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("Name", [System.Management.CimType]::String, $false)
    $newClass.Properties["Name"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("SerialNumber", [System.Management.CimType]::String, $false)
    $newClass.Properties["SerialNumber"].Qualifiers.Add("read", $true)
    # Commit the class definition to the WMI repository; without this the class is never created
    $newClass.Put()
}

# Check whether we already created our custom WMI class on this PC; if not, create it
[void](Get-WmiObject MonitorDetails -ErrorAction SilentlyContinue -ErrorVariable wmiclasserror)

# If wmiclasserror is set then assume that the WMI class does not exist yet and try to create it
# If creating the WMI class fails, exit with error code 1
if ($wmiclasserror) {
    try { Create-Wmi-Class }
    catch {
        "Could not create WMI class"
        Exit 1
    }
}

# Iterate through the monitors in Device Manager
$monitorInfo = @() # Empty array
Get-WmiObject Win32_PnPEntity -Filter "Service='monitor'" | ForEach-Object {
    $mi = @{}
    $mi.Caption = $_.Caption
    $mi.DeviceID = $_.DeviceID

    # Then look up the monitor's EDID data in the registry
    $path = "HKLM:\SYSTEM\CurrentControlSet\Enum\" + $_.DeviceID + "\Device Parameters"
    $edid = (Get-ItemProperty $path EDID -ErrorAction SilentlyContinue).EDID

    # Some monitors, especially those attached to VMs, don't have a Device Parameters key or an EDID value. Skip these
    if ($edid -ne $null) {

        # Collect the information from the EDID array in a hashtable
        $mi.Manufacturer += [char](64 + [Int32]($edid[8] / 4))
        $mi.Manufacturer += [char](64 + [Int32]($edid[8] % 4) * 8 + [Int32]($edid[9] / 32))
        $mi.Manufacturer += [char](64 + [Int32]($edid[9] % 32))
        $mi.ManufacturingWeek = $edid[16]
        $mi.ManufacturingYear = $edid[17] + 1990
        $mi.HorizontalSize = $edid[21]
        $mi.VerticalSize = $edid[22]
        # The EDID stores the image size in centimetres, so divide by 2.54 to get the diagonal in inches
        $mi.DiagonalSize = [Math]::Round([Math]::Sqrt($mi.HorizontalSize*$mi.HorizontalSize + $mi.VerticalSize*$mi.VerticalSize) / 2.54)

        # Walk through the four descriptor fields
        for ($i = 54; $i -lt 109; $i += 18) {

            # Check if the descriptor field is the serial number (0xff) or the monitor name (0xfc)
            # If yes, extract the 13 bytes that contain the text and append them into a string
            if ((Get-LittleEndianInt $edid $i) -eq 0xff) {
                for ($j = $i+5; $edid[$j] -ne 10 -and $j -lt $i+18; $j++) { $mi.SerialNumber += [char]$edid[$j] }
            }
            if ((Get-LittleEndianInt $edid $i) -eq 0xfc) {
                for ($j = $i+5; $edid[$j] -ne 10 -and $j -lt $i+18; $j++) { $mi.Name += [char]$edid[$j] }
            }
        }
    }

    # If the horizontal size of this monitor is zero (or there was no EDID), it's a purely virtual one (i.e. RDP only) and shouldn't be stored
    if ($mi.HorizontalSize -gt 0) {
        $monitorInfo += $mi
    }
}

# Clear any instances left over from a previous run out of WMI
Get-WmiObject MonitorDetails | Remove-WmiObject

# And store the data in WMI
$monitorInfo | ForEach-Object {
    [void](Set-WmiInstance -Path \\.\root\cimv2:MonitorDetails -Arguments @{DeviceID=$_.DeviceID; ManufacturingYear=$_.ManufacturingYear; `
        ManufacturingWeek=$_.ManufacturingWeek; DiagonalSize=$_.DiagonalSize; Manufacturer=$_.Manufacturer; Name=$_.Name; SerialNumber=$_.SerialNumber})

    # Uncomment to echo each Set-WmiInstance call for debugging:
    #"Set-WmiInstance -Path \\.\root\cimv2:MonitorDetails -Arguments @{{DeviceID=`"{0}`"; ManufacturingYear={1}; ManufacturingWeek={2}; DiagonalSize={3}; Manufacturer=`"{4}`"; Name=`"{5}`"; SerialNumber=`"{6}`"}}" -f $_.DeviceID, $_.ManufacturingYear, $_.ManufacturingWeek, $_.DiagonalSize, $_.Manufacturer, $_.Name, $_.SerialNumber
}
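Before creating the package, it is worth a quick local test of the script. Assuming it has been saved as get-monitor-details.ps1, running it from an elevated PowerShell prompt and then querying the new class should return one instance per attached monitor:

.\get-monitor-details.ps1
Get-WmiObject MonitorDetails | Format-List DeviceID, Manufacturer, Name, SerialNumber, ManufacturingYear, ManufacturingWeek, DiagonalSize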

The script needs to run on each PC at a regular interval to keep the data up to date, so that when a monitor is added to or removed from a PC the change is picked up at the next run. Save the PowerShell script to a location that can be used by SCCM as the source location of a package. This location will be referenced as the Source Location for the remainder of this procedure.

Open the System Center 2012 Configuration Manager console
Select the Software Library workspace
Expand the Application Management node and select the Packages node
Select the subfolder where the package will be created, right click and select Create Package from the drop down list
Enter the following information:

Name: Monitor Details Gather

Description: Extract the monitor EDID information from the client and store the data in WMI ready for collection by SCCM

Version: 1.0

Click the checkbox labelled “The package contains source files” and click Browse

Enter the UNC path to the Source Location folder created earlier in this procedure.

Click OK

Once back on the package screen, click Next

Select Standard Program and click Next
Enter the following information:

Name: Get Monitor Details

Command Line: get-monitor-details.ps1

Run: Normal

Programs can run: Whether or not a user is logged on

Click Next
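One caveat worth flagging: whether a bare .ps1 command line launches successfully depends on the file associations and PowerShell execution policy on your clients. If the program fails to run, a common workaround (suggested here as an alternative, not something the wizard requires) is to call PowerShell explicitly:

powershell.exe -NoProfile -ExecutionPolicy Bypass -File get-monitor-details.ps1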

Leave all settings as default and click Next

Confirm the settings and click Next to create the package

When the package creation is completed, click Close

Within the console, right click on the package and select Distribute Content from the drop down list
Click Next
Click Add and select Distribution Point from the drop down list
Select the distribution points that require the content and click OK
Once all distribution points have been added, click Next
Confirm all the settings and click Next
When the Distribute Content Wizard is completed, click Close

Once the package is created we need to deploy it to run on a regular schedule on clients. The script needs to run regularly because monitors move from PC to PC over time; how frequently is up to each organization and what they are trying to achieve.

To set up a deployment:

Within the console, right click on the package and select Deploy from the drop down list
On the Collection field, click the Browse button
Select the collection that the script will be deployed to and click OK.

On the previous screen, click Next

Confirm that the content has been distributed to a distribution point and click Next
Select Required as the installation type and click Next
On the schedule wizard screen, click New
Click the Schedule button
Select the start time for when the script will run on the workstations.

Select a custom interval and set the schedule to recur every 1 day.

Click OK.

Click OK
Click Next
Leave all settings as default and click Next
Leave all settings as default and click Next
Confirm the settings and click Next to create the deployment
When the Deploy Software Wizard is completed, click Close

Step 2: Retrieve the WMI Data via ConfigMgr

Now that we have the data stored in WMI, we need the ConfigMgr client to return the data the next time it does a Hardware Inventory.

To be able to select the correct fields within ConfigMgr, the WMI class needs to exist on at least one PC that you have access to.

Select a PC to run the script on and execute the PS1 file.

This PC will be used later to query the class that will allow System Center 2012 Configuration Manager to collect inventory from all other workstations.

Select the Administration workspace
Select the Client Settings node
Select the Default Client Settings item (or a client settings item that affects all workstation clients), right click and select Properties
Select Hardware Inventory from the settings list
Click Set Classes
Click Add
Click Connect
Enter the Computer name that the script was run on earlier in this procedure and click Connect
Select the MonitorDetails class from the list and click OK.

If the MonitorDetails class is not there, then the script has not run successfully on the computer you are connecting to. Make sure you test the PowerShell script and repeat if necessary.
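If you would rather confirm the class remotely before opening the Hardware Inventory dialog, a quick query from another machine should return the monitor instances (TEST-PC-01 is a placeholder for the PC the script was run on):

Get-WmiObject -Class MonitorDetails -Namespace root\cimv2 -ComputerName TEST-PC-01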

Once the class is selected, click OK on the remaining open windows


This process tells the client to retrieve the WMI class that we just created and populated using our PowerShell script. Once this is set, it will not need to be revisited unless the client settings change or are recreated for any reason.
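When testing, rather than waiting for the next scheduled inventory cycle, you can trigger a hardware inventory on a client immediately. The GUID below is the ConfigMgr client's standard hardware inventory schedule ID:

Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000001}"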

And there we have it.

The PowerShell script will run against clients, updating WMI, and as those clients report in their hardware inventory the monitor details will appear in Resource Explorer like any other hardware detail.
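As a rough sketch of what querying that data can look like: once clients report in, the inventory surfaces through the SMS Provider on the site server. The class name below is an assumption based on the usual SMS_G_System_ naming for custom inventory classes, and SiteServer01 and PS1 are placeholders for your site server and site code:

Get-WmiObject -ComputerName "SiteServer01" -Namespace "root\sms\site_PS1" -Query "SELECT * FROM SMS_G_System_MONITORDETAILS"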

For many, this may be enough, as they will be able to report on the ConfigMgr database and get the results they are after. Others want a more thorough view of Asset Management and may want to pull this information into their Asset Management solution to show these relationships.

In my next blog post, I will go through how to use the Cireson Asset Management Solution to pull in this data, create or update a Hardware Asset item for each monitor and finally how to associate it with the computer it is plugged in to.

An ITIL Change Management Checklist (Best Practices to Avoid Common Pitfalls)

In the last week I’ve been doing a couple of presentations on Change Management and where to start for businesses. This post will talk about the IT Service Management life-cycle and, most importantly, about delivering services to our end users, or customers, that are successful, have little to no negative impact on business continuity during deployment, and reduce business risk wherever possible.

This post will focus on Change Management: where to start with it, what the best practices are, and how we can make it easier on ourselves.

To kick off, I think it is important that we have a clear idea of what a change is and why change management is important.

“A change is defined by ITIL as the Addition, Modification or Removal of anything that could have an effect on IT services.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Now I would make one slight modification to this statement and replace IT Services with Business Services.

Why should we restrict the amazing work we are doing to just IT?

ITIL also tells us that “Change Management provides value by promptly evaluating and delivering the changes required by the business, and by minimizing the disruption and rework caused by failed changes.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Some of the key words that we should all keep a focus on are “Promptly” evaluating, “Minimizing” disruption to end users and also “Minimizing Rework” that is required when a change fails.

There are some common challenges that people have when looking at Change Management for the first time, or looking at improving their Change Management processes. Here are three of the main challenges that I see when discussing Change Management with clients:

Stuck with someone else’s mess

Many people fail before they even start because they are buried in a mess created before they arrived. Either because of a failed attempt to get change management implemented or just a complicated system that has always existed.

And as we know many systems are just maintained because “That’s the way it’s always been done”.

Getting buy-in from the entire business is important. Having the business behind a push for better change management will enable you to wipe the slate clean and build something less complex and more targeted to the business.

Not sure where to start

Change management can be a big beast and can get people all bent out of shape knowing they will have to follow some sort of formalized process.

However, as we will see, there is no need for Change management to be as complex as people think it will be.

It’s Too Complex

Yes, this would have to be my personal number one bugbear with some change management processes.

But as we mentioned earlier, ITIL tells us that change management should “provide value by promptly evaluating and delivering the changes…”

So if a change management process takes too long, or is an arduous process, then we know we have it wrong.

Too many fingers in the pie

This is an oversimplification of this point.

What I’m trying to explain here is that many groups or sub-groups that we work with have set procedures of their own, at varying levels of maturity, and quite often these independent groups think they have it right and want to take care of it all themselves.

However, these processes are often independent of each other and can get in each other’s way or “rework” the same information or data several times.

Imagine, if you will, that every car manufacturer had their own set of road rules. Individually these rules may work and may be a perfect procedure for that car manufacturer. That’s all well and good until we all start sharing the same road.

Then, we have chaos.

Even though everyone is following a set and tested procedure, if we don’t take into consideration all the other systems within our business then we see conflicts, and changes that were doomed to fail from the start.

Specifically in IT, as our systems become ever more complex these issues occur on an all too frequent basis.

I’m sure everyone has an example of where a minor change to one system had a catastrophic outcome for some unrelated system that no one knew about or had considered.

Good change management can reduce the amount of time spent on unplanned work but it has to be effective.

Bad change management will just add an administration layer to the firefighting we always do.

This both wastes time and fails to reduce the amount of unplanned work we have.

From what we have talked about so far there are some basic rules we can stick to that will help guide us to a good Change Management process.

Promptly is the key

If a process takes too long then no one is going to want to follow it. High risk issues are always going to take longer but there is no need to drag our feet where we don’t need to.

Low risk issues should be able to be speedily processed and maybe even automatically approved.

Which leads us to our next point,

Fit for Purpose

There is no need to bother your CAB with routine basics. If the CAB can clearly define what they require for a basic low-risk change, then make sure your process hits that and move on.

CAB have bigger fish to fry and more risk to deal with.

So why not have a simple process for Low risk changes. One Change Manager to review then do the change. SIMPLE!

How do we make sure that we capture these key points?


Create templates where possible. Inject data we already know, so people don’t have to guess at values or (as I am sure we have all seen) just make them up.

It is more important to be able to get a consistent and complete result than it is to get the perfect result. Consistency allows us to report and see where we are doing well, and where we can improve.

More processes to make all this happen is NOT the solution. Often less is more when it comes to these processes.

We can all think of a change that we SHOULD do but never quite get around to. How about rebooting a server? Depending on the server this could be low risk, minimal impact, not worth going to CAB over…  But should it be a change?

Remember a change is defined as “…the Addition, Modification or Removal of anything that could have an effect on IT services.”

Well why not have a change process that accepts that a low risk server can be rebooted without CAB approval, just so long as it is recorded?

Why not automate it?!
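As a sketch of what that could look like in SCSM, using the community SMLets module: record the reboot as a change first, then do the work. The server name and property values here are purely illustrative, and any approval or activity requirements from your own process would still apply:

Import-Module SMLets

# Record the reboot as a pre-approved standard change before doing the work
$crClass = Get-SCSMClass -Name System.WorkItem.ChangeRequest$
$cr = New-SCSMObject -Class $crClass -PropertyHashtable @{
    Id          = "CR{0}"
    Title       = "Standard change: reboot LOWRISK-SRV01"
    Description = "Pre-approved low-risk server reboot, recorded automatically"
} -PassThru

# Then perform the change itself
Restart-Computer -ComputerName "LOWRISK-SRV01" -Force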

Of course none of this is any good if we don’t know the risk.

More specifically, Correct Risk.

So what is the best way to assign risk to our IT Services?

This is a big topic that usually takes many committees and sub committees, revisions, arguments, another committee and more arguments.

There is a much simpler way to assign risk to an IT service and you most probably have already done it. D.R.

We classify systems for Disaster Recovery in most organisations. We have had the argument and we have spent the money to make sure those “Business Critical” systems have good quality D.R.

If you are like most organizations I’ve worked for, you will have gone through the process of “What do we cover with DR?”

And we start by including EVERYTHING.

We then get the quote back from a vendor on what that DR solution would cost and we quickly write off half a dozen services without even thinking.

And again and again we go until we have a DR solution that covers our Business Critical systems.

Guess what? They are High risk.

Some systems we have internal HA solutions for. Maybe SCSM with 2 Management servers.

Not Critical….  We could live off paper and phone calls for a few hours or even days without it….   Let’s say medium risk.

Then we have everything else. Low risk.

Simple. Why over complicate it?

So all the theory in the world is all well and good but what are some real world examples? Well I’m glad you asked.

I wanted to show you a range of items that had a common theme but ranged in complexity and risk. That way I can demonstrate to you the way it can be done in the real world with a real scenario.

What better scenario than our own products?

However, there is no reason that these examples could not be easily translated to ANY system that you may have in your organisation.

Our first example is the Cireson Essentials Apps, like Advanced Send E-mail, View Builder, and Tier Watcher.

These can be classified as “Low Risk” because only IT Analysts use them, and IT Analysts can do their job without them; it would take more effort, but they could still work.

Second is the Self Service Portal. This affects more than just the IT Analysts: end users lose the ability to view and update their Work Items through Self Service. But all of this can still be done via phone or e-mail, although it will take longer and not be as simple for end users.

Finally, an upgrade to Asset Management is a high-risk change for our system. Asset Management affects a wide range of systems and impacts SRs as well as support details that analysts use.

In addition, the upgrade of AM is not a simple task. During the upgrade there are reports, cubes, management packs, DLLs and PC client installs that are required.

So let’s take a look at what this looks like in the real world.

So when creating a change management process surely there are some simple steps we can follow to get the ball rolling.

Here is what I like to tell people are the 4 key pieces of a successful change management practice.

Less Process

Keeping the process as simple and as prompt as possible. This can be started by creating basic process templates for Low, Medium and High risk changes. There are always going to be exceptions that we have to add a little more process for, but wherever possible stick to the three standard basic templates.


Testing

The number 1 reason for failure of changes that I’ve ever been involved in is testing. There is nothing like the old “Worked fine in my lab…” line.

The amount of rework or “unplanned” work that stems from an untested change is huge, and even just a little testing can catch big issues.

Get the right people involved

We are not always experts in what a system is, what it does or how it should work.

How many times has your testing for an application package been to install it and if it installs without an error, it must be good?

What if when an end user logs on the whole thing crashes?

So even getting end users involved in your testing of minor changes can be a huge benefit.

And finally….


Review

So many places I see never have a formal review process.

These are critical for making sure the processes don’t stray from a standardized approach and that we know that all the obvious things are right and the whole thing is not going to blow up when we hit go.

Just reviewing the failures to find what went wrong is not enough.

It is also important that the changes that went right are fed back into future decisions, to avoid the ones that go wrong. This feedback should find its way back into change templates AND base documentation for the systems, so we keep all our processes up-to-date.

These reviews don’t have to be long, but they can identify ways to make the process simpler, where a template can be created, or where the CAB can be cut out of the picture entirely!

One fantastic question I had recently was “How many changes should we template?”
This is a great question, as many people think that templating everything they do is a waste of time. This is not the case.
If you have a change that you do on a recurring basis (not a one-off), even if it only occurs once every six months or year (in fact I’d argue especially if it is only once or twice a year), it is worth templating for two main reasons:

  • Does anyone remember the correct process for the change?
    Often a single person is responsible for a change and all the knowledge can be locked up in their head. By templating processes this way, we can ensure that the knowledge is available to everyone so if the original person is no longer available the change can still be a success.
  • Was the process successful last time we ran it and if not, what went wrong so we don’t do it again?
    If you are only doing a change once or twice a year a template is a great way of making sure that lessons learnt are never lost and mistakes are worked out of the process.

A standard approach might include a set of standard activities that are carried across all risk levels, but we just keep adding to the process as we move up the risk profile. A basic template might look something like this:

CR Standardization

The above example is just that: an example. It is by no means prescriptive guidance that you should follow religiously, but more of a starting point. There are always going to be those changes that require more steps because the change is more complex. What is important is that the basics are covered and a standard approach is followed that encompasses the key points outlined in this article.

So to sum this all up:

  • Prompt and Simple Process. Make it quick and simple
  • Standardize ALL changes to a simple set of rules and create templates
  • Make sure your changes are fit for purpose. Only bother the CAB when you need to and have the right people involved
  • Simple risk calculation (use disaster recovery plans if you don’t know where to start)
  • Review and document your changes to improve what you do

Runbook is in an Invalid State


A common issue I run into a lot with SCSM automation is the following error message:

The Runbook associated with this Runbook activity template <Name of template>, is in an invalid state. Select another Runbook or ensure that the Orchestrator connector is properly configured

Error message in the SCSM console


This is caused by the Runbook being in an invalid state within SCSM, not within Orchestrator.

To see what I mean, within SCSM navigate to the Library workspace and select the Runbooks node.

Invalid Runbook

When a Runbook within SCSM is in an invalid state, it is usually because the input properties for the Initialize activity within the Runbook itself have been changed since the first sync of the Orchestrator connector, and SCSM does not know what to do with the new properties (or the removal of the old ones).


The solution is fairly straightforward.

Within the SCSM Console, select the Runbook that has a status of “Invalid” and select Delete.
This will delete it from the SCSM Console only, not from Orchestrator.

Then re-run the Orchestrator Connector:

  1. Select the Administration workspace
  2. Select the Connectors node
  3. Select the Orchestrator connector you need to re-run
  4. Click Synchronize Now in the tasks pane

Re-run the Orchestrator Connector
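If you prefer, the same sync can be kicked off from PowerShell with the community SMLets module (this assumes your SMLets build includes Start-SCSMConnector, and the display name filter is illustrative):

Import-Module SMLets
Get-SCSMConnector | Where-Object { $_.DisplayName -like "*Orchestrator*" } | Start-SCSMConnector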

Once the connector has finished it should all be back to normal.