Since 2015, pretty much all of us who use System Center Service Manager (SCSM) have used the Microsoft Exchange Connector v3.1 to capture e-mails coming from end users and turn them into Incidents. It works well and does what it says on the box… but wouldn’t it be great if it did some other things?
What if it could merge replies to prevent multiple work items from being created, or work with encrypted e-mail systems, or even use AI to predict the subject and auto search KB articles for the end user?
All this and be open source so we could customise it ourselves?
That would be special…
Well that’s exactly what one member of the Cireson Community did!
Adam Dzyacky took on this challenge and has now created an Open Source, Community driven, PowerShell coded Exchange Connector that not only preserves the functionality of the Microsoft Exchange Connector but adds additional functionality.
Recently I was lucky enough to sit down with the connector’s creator, Adam Dzyacky, and ask him a bunch of questions about the product, so I thought I’d write a blog post to share some of those questions and answers: the genesis of the product, its goals, its current abilities, and how people can use it today… FOR FREE! 🙂
Question: Where did the idea of this connector come from and what was your thought process behind creating this connector?
Answer: Several years ago when I first got involved with Service Manager and the Exchange Connector I was immediately confronted with a problem – the stock connector only processes a single message type (IPM.Note). As such, any other message type is simply ignored. Out of Office, Meeting requests, Signed/Encrypted messages…all of it.
But hope was not lost because with some PowerShell and SMA, one could create scheduled SMA jobs to pick up what the stock connector missed. It would certainly introduce a new level of administration, but once it’s automated the work is done. I thought to myself:
“Well at least I can curb this with PowerShell so I guess it isn’t that bad.”
But I couldn’t help but shake the feeling that I can’t be the only one who cares about those other message types.
Next, beyond the unsupported message types, there was how the connector behaved when it came time to process even basic emails. Employees replying to the same email thread within a single processing loop of the connector would generate a new and unique Work Item for every single reply, instead of simply appending to the Action Log of a single Work Item.
Since the connector isn’t real time and instead runs every X minutes… well, a lot can happen between runs of the connector! It’s unpredictable behaviour that requires the team(s) charged with that initial filtering to do a lot of Work Item micro-management, detracting from their actual work of resolving Incidents and fulfilling Service Requests. That’s potentially a lot of duplicate Work Items to close in SCSM, and no less to learn to ignore in reporting.
In this case, a supplementary PowerShell and SMA job can’t solve the problem because the Work Items have already been created. The connector would need to understand the concept of an email thread at the source, before Work Items are created or updated.
The above are but the first of many issues I had with the stock connector. It’s not that it isn’t great at what it does, it’s just I wished I could change some of it.
But no matter how much I wished I could change it, the Exchange Connector is a sealed, closed-source, C# management pack. Even if you could address this at its source, you’d need not only an understanding of the C# programming language but also an in-depth understanding of the System Center SDKs.
Question: So what was your plan of attack to fix these issues?
Answer: In February of 2017 I finally had enough of what wasn’t possible and committed my requirements to OneNote.
- Preserve all functionality of the stock connector
- Introduce some kind of new functionality over the stock connector
- Be modular to support new/changing processes
- Be open source
- No programming languages – it needed to be something that more than just developers could understand and ultimately edit
Question: No programming languages? As an admin I love the thought of that. So what was the plan of attack?
Answer: From there, the decision was straightforward: build an Exchange Connector written entirely in PowerShell, leveraging the widely used community PowerShell module SMLets.
On top of that, host on GitHub so that bugs can be tracked, features requested, and anyone can contribute.
If successful, you’d be able to drop the stock Exchange Connector, improve performance on your workflow server (especially if you had multiple connectors for multiple inboxes), optionally move the script into an SMA or Azure Automation runbook, and of course introduce a host of new possibilities, as the only limitation to new features would be PowerShell.
As per Tom Hendricks’ comment in the Cireson Community thread:
“Limitation and PowerShell do not often appear in the same sentence.”
Question: How long did it take you to write the initial version?
Answer: In what probably totals about three weeks of actual focused work – I had the first version done.
Question: Being Open Source means that anyone can contribute to it, but allowing people to contribute and finding people to contribute are two different things. Have you been able to garner support from others to help develop this solution?
Answer: Starting April 2017 I shared this with Tom Hendricks, Brian Wiest, Martin Blomgren, and Leigh Kilday who were gracious enough to provide their time to test and provide feedback for the first release published on GitHub later that month.
Question: So what exactly does it do? What are its features?
Answer: The connector has all of the regular features of the stock Exchange Connector plus new features that fall into two categories:
- People who are using SCSM by itself
- People who are using SCSM with Cireson products
Features if you’re just using SCSM
- Change Requests
- Service Requests
- Manual Activities
Announcements
Just throw [announcement] into your next email to Service Manager and, as long as you’re part of a configurable AD group, an Announcement will get created in SCSM. Need to control the priority? Just add an additional #low or #high; Announcements default to Normal priority otherwise. And yes, you can update an announcement simply by keeping the [Work Item] in the subject.
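The keyword logic is simple to reason about. As a rough illustration only (the connector itself is written in PowerShell; this Python sketch just mirrors the decision flow, and the AD group name is a made-up placeholder):

```python
import re

def parse_announcement(subject, body, sender_groups):
    """Sketch of the announcement rules above: the keyword must appear in
    the subject, the sender must belong to a configured AD group, and
    #low/#high in the body override the default Normal priority."""
    if "[announcement]" not in subject.lower():
        return None
    if "Announcement Admins" not in sender_groups:  # hypothetical group name
        return None
    text = body.lower()
    priority = "High" if "#high" in text else "Low" if "#low" in text else "Normal"
    title = re.sub(r"\[announcement\]", "", subject, flags=re.I).strip()
    return {"title": title, "priority": priority}
```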
Minimum File Attachment Size
No more signature graphics as attachments. Set a minimum such as 25 KB and your Work Items will get a whole lot cleaner.
Maximum File Attachment Size
Optionally enforce File Attachment Settings as defined in the Administration -> Settings pane of each Work Item type.
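Taken together, the minimum and maximum checks form a simple size window. A sketch of that behaviour (Python, illustrative only; the real connector is PowerShell, and the maximum would come from SCSM's File Attachment settings rather than a constant):

```python
MIN_BYTES = 25 * 1024            # e.g. the 25 KB floor suggested above
MAX_BYTES = 10 * 1024 * 1024     # example ceiling standing in for the SCSM setting

def keep_attachment(size_bytes):
    """Keep an attachment only if it falls inside the configured window,
    so signature logos (too small) and oversized files are both dropped."""
    return MIN_BYTES <= size_bytes <= MAX_BYTES

attachments = {"logo.png": 4_096, "error.log": 131_072}
kept = [name for name, size in attachments.items() if keep_attachment(size)]
```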
File Attachment “Attached By” Relationship
When the connector processes messages, the Sender will be marked as the “Attached By” relationship for attachments. This is useful when multiple parties are updating the same Work Item.
Review Activities without [approved] or [rejected]
Do your end users think someone is actually reading the Service Manager inbox, so they respond to RAs with questions? Fret not, because comments that don’t contain a vote will now get appended to the Action Log of the highest parent Work Item.
Vote on Behalf of AD Groups
Open up a whole new world of voting possibility!
Schedule Work Items
The Scheduled Start/End times of a Work Item can now be set by sending a meeting request to Service Manager. No Work Item yet? Just like email, if a Work Item doesn’t exist to update, a new one will be created, and those date fields will be set as part of the Work Item’s creation.
Digitally Signed/Encrypted Messages
Leveraging the open source MimeKit project the connector can process digitally signed or encrypted emails just like regular mail.
Distributed Application Health
Get the health of your [Distributed Apps] and their current Active Alerts.
Private Comments
Want to keep notes between analysts? Just throw a #private into your message to SCSM and it’ll get marked as Private on the Action Log.
Merge Email Replies
No more duplicate Work Items: when users reply to an email that does not have a [Work Item] in the subject, Service Manager will identify the email thread they were in and update the one, true, correct Work Item.
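Conceptually, the routing decision looks something like the following (a Python sketch of the logic rather than the connector's actual PowerShell; Exchange's ConversationID is what makes thread matching possible):

```python
import re

thread_index = {}  # Exchange ConversationID -> Work Item ID already created
_counter = 0

def route_email(subject, conversation_id):
    """Thread-aware routing: an explicit [IRnnnn]/[SRnnnn] tag wins;
    a known conversation updates its existing Work Item; only a
    genuinely new thread creates a new one."""
    global _counter
    match = re.search(r"\[(?:IR|SR)\d+\]", subject)
    if match:
        return ("update", match.group(0).strip("[]"))
    if conversation_id in thread_index:
        return ("update", thread_index[conversation_id])
    _counter += 1
    work_item = f"IR{_counter}"
    thread_index[conversation_id] = work_item
    return ("create", work_item)
```

A reply in the same conversation, even with a mangled subject line, resolves to the Work Item the thread already created instead of spawning a duplicate.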
Create Related Work Items on Closed Work Items
Sometimes employees send an email about a Closed Incident. Rather than turn a blind eye, a new related Work Item will get opened for them, copying information from the previous Work Item into the new one along with their recent comment.
Multiple Inbox Redirection
Configured correctly, you can redirect several Exchange inboxes to your single Service Manager inbox. On top of this, unique templates can and will still be applied based on the source inbox they were redirected from. Buh-bye, multiple connectors!
More Default Work Item Types
No reason to limit yourself. The connector can now be configured to create Change Requests or Problems by default. Great for vendors sending maintenance notices or analysts generating Problems.
Sentiment Analysis
Did you battle with the classic Exchange Connector dilemma of “What should the default Work Item type be when people send in emails – Incident or Service Request?” Wouldn’t it be great if Service Manager could just decide whether to create an IR or SR based on the Affected User’s perceived attitude? Thanks to Azure Cognitive Services, emails can now be run through Sentiment Analysis and will dynamically create either a Service Request or an Incident based on a minimum defined score, as configured per organisation.
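The routing rule itself is tiny; Azure does the heavy lifting. A sketch of the decision (the 0-to-1 score shape follows the Text Analytics sentiment API of that era, where low means negative; the parameter name is illustrative):

```python
def choose_work_item_type(sentiment_score, minimum=0.5):
    """Text Analytics style score in [0.0, 1.0], where low means negative.
    Below the organisation's configured minimum the sender sounds unhappy,
    so raise an Incident; at or above it, a Service Request."""
    return "Incident" if sentiment_score < minimum else "Service Request"
```

So "nothing works and I am furious" scores low and becomes an Incident, while "could I please get a new mouse" scores high and becomes a Service Request.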
Features If You Are a Cireson Customer
Suggest Knowledge Articles
You can optionally enable the connector to use the body of the email as a search query against one’s respective Cireson HTML KB. Once complete, the connector will send an HTML email back to the Affected User with suggested Knowledge Articles and hyperlinks to them.
Suggest Request Offerings
You can optionally enable the connector to use the body of the email as a search query against one’s respective Cireson Service Catalog. Once complete, the connector will send an HTML email back to the Affected User with suggested Request Offerings and hyperlinks to them.
Send Outlook Meeting
The connector supports the ability to create or update Work Items from Meeting Requests. This introduces a New Work Item task on the Cireson portal so you can further leverage this feature.
Portal Announcements
Just throw [announcement] into your next email to Service Manager and, as long as you’re part of a configurable AD group, an Announcement will get created in the Cireson SCSM Portal. Who will see it? Simple – the distribution groups you included on your outgoing email message! Need to control the priority? Just add an additional #low or #high; Announcements default to Normal priority otherwise. And yes, you can update announcements simply by keeping the [Work Item] in the subject.
[take] Keyword Restrictions for Support Groups
Maybe you want to put some restrictions on who can [take] things. Leveraging the Cireson Web API, this is now possible by checking whether the Sender is part of the Support Group the Work Item is currently assigned to.
Keyword Analysis
Instead of using the entire email body to suggest Knowledge Articles or Request Offerings to the Affected User, Azure Cognitive Services will pick out the keywords of the message and use those to drive suggestions. This results in more focused searches and faster processing times.
Question: WOW! That’s a lot. What’s next on the planning table and how can others join in the conversation?
Answer: A few that come to mind are things like creating Work Items on behalf of others through the connector, assigning to yourself on Create, and as GitHub community suggested – integrating with the Cireson Portal Watchlist feature. All of these can be found on the repo’s Issue page.
Speaking just for myself, I’d say that since day one I’ve wanted some kind of AI integration, and fortunately Azure Cognitive Services readily provides that through easily consumable APIs. While we have sentiment and keyword analysis in the current version, I think the more interesting topics are things like using their Speech API to convert voicemails to Work Item descriptions, or using LUIS to understand intent to drive specific actions within SCSM. But ultimately, it’s just discussion at this point.
Question: How would someone get involved in contributing to the project if they wanted to?
Answer: All it takes is a GitHub account. After you sign up you can Fork the repository. This, in short, creates a duplicate SMLets Exchange Connector under your own account that you can edit and change how you see fit and submit requests to Merge back into the master repository if you want. Cireson Community member Roland Kind has done this to start building a version that makes use of the stock SCSM cmdlets if you prefer that module instead.
An account also gets you the ability to suggest features, post bugs, and join the conversation directly on the Issues page. Maybe you just want to be notified when there are changes? If you put a Watch on the repo you can get email notifications when changes occur. Or if you just want to show your support you can also Star the repository.
The new PowerShell based Open Source Exchange Connector is nothing short of AMAZING!
Thanks go to Adam Dzyacky and everyone else who has contributed to this solution for all the hard work and dedication to get it up and running.
New features get added regularly and there is a vibrant and energetic group of contributors who keep it updated and supported. (Not sure I could say the same about the MS Exchange Connector offering – Last updated in 2015)
While some organisations may have issues with this solution being open source and not officially supported by a vendor, I personally think the benefits far outweigh the possible risks. Considering the time and effort we all spend micro-managing the results of the out-of-the-box connector, this new solution will shave tens of hours per week off support effort.
As another year gets underway and we look forward to another year of technological breakthroughs and industry changing trends we often have to stop and re-evaluate our investments in some technologies and reaffirm our commitment to others.
2017 saw vast swings in technology: a Bitcoin bubble to rival any other bubble in history, amazing advances in Artificial Intelligence, Apple deliberately slowing their phones in an attempt to make us want to buy the “latest and greatest” models, and cyber-attacks at an all-time high, including huge losses of user details across a wide range of companies such as Yahoo, Kmart, Equifax, Imgur, and even Uber.
2018 is shaping up to be even more disruptive as we see early indications of a bubble burst and potential collapse of Bitcoin, exciting advancements in mobile phone technology and VR, and one of the most impactful security vulnerabilities to ever hit the industry in the form of the Meltdown and Spectre exploits.
So what are the technologies worth watching, and how might they affect our businesses, our industry, or even society itself?
Here are my top 5 that I believe will make huge impacts in 2018.
Block Chain (The tech behind Bitcoin)
Bitcoin has been in the news a lot of late for some good reasons and some bad. More importantly than the massive swings in value of Bitcoin is the technology that makes it all work.
Block Chain is a new way of decentralizing the data required to drive many applications, meaning that our transaction data no longer needs to be stored and secured by a specific company (Uber, AirBNB, Twitter, Google, FaceBook, etc.). Instead, Block Chain databases allow for the authentication of a transaction (say, a driver picking up and dropping off a passenger) with it all being encrypted, open source, highly available, and unable to be corrupted without anyone noticing.
This technology does not have to be limited to financial transactions; it can also be used to verify the identity of an individual. For example, Australia Post has announced it will be using Block Chain technology within its Digital ID platform.
I think 2018 will be the watershed year for Block Chain and for how we in the IT industry think about data and trust across a wide range of applications.
AI, Bots and Digital Assistants
We’ve slowly seen the emergence of digital assistants such as Apple’s Siri, Amazon’s Echo and Alexa, Google’s Assistant, and even Microsoft’s Cortana, but these have been more of a novelty than something we rely on in our day-to-day lives.
As AI technology improves, even with basic pattern recognition improvements and big-data mining techniques, these applications will become more ubiquitous and will really start to make an impact on our daily lives.
We are already seeing the emergence of chat bots in areas such as banking (great examples are Wells Fargo and Australia’s Commonwealth Bank); however, each of these chat bots is specific to its own area of expertise and exposed to a specific data set that it can reply about.
Once we have a way to retrieve all of the required data from all of the companies we interact with, then we are going to see some great leaps ahead in how we interact with companies, consumers and even government agencies.
With access to more machine learning, in 2018 we should start to see proactive skills appear in our digital personal assistants that will notify us of suspect banking transactions, tell us when our friends or pizza delivery are arriving, remind us when we are due for a health check, or even book all of our flights and accommodation ahead of time to get the best deals.
VR vs AR vs MR (Because we need more acronyms in our industry!)
Virtual Reality is awesome!
VR headsets such as the HTC Vive and the Oculus Rift are not new in 2018, but we will see increasing numbers of games and content tuned to VR. If you have ever used a VR headset, you will agree that the experience of playing an existing high-end game in VR (such as Fallout 4) is cool but clunky, as the original controls were never built with VR in mind. In 2018, new high-end content built for VR from the ground up will bring a level of realism to games that will literally be game changing. 🙂
Some tech that you may not have played with is AR, or Augmented Reality, especially in the form of the Microsoft HoloLens. I had a chance to try this nearly two years ago, and the ability to see the real world but augment it with digital content was revolutionary, though also limited by its narrow field of view.
MR, or Mixed Reality, is the next big thing, and Microsoft is the leader in this space with all the lessons they have learnt from HoloLens.
What is MR? Take all the positives of VR but remove the need for pre-mapping a room with special sensors. This opens up the world to a virtual experience without limitations.
2018 will see more innovation and a faster move towards some sort of augmentation of how we perceive the world. It may start with big bulky headsets but rapidly move to helmets, windscreens, and regular old glasses before we start wearing them as contact lenses!
If the argument of VR vs MR ever comes to a head, like the good old days of VHS vs Betamax or Blu-ray vs HD DVD, consider me squarely in the MR camp.
Being a System Center tragic, I couldn’t predict technology in 2018 without including some notes about System Center and what I think is on the horizon for the next 12 months.
System Center Configuration Manager
Everyone’s favourite System Center product would have to be Configuration Manager. It is one of the easiest products in the IT industry to predict, as we are not only given the opportunity to vote on the features we want using the UserVoice feedback page, but Microsoft even gives us the next version ahead of time with the monthly Technical Preview releases.
One thing that is obvious from Microsoft’s direction is that Intune will become more and more integrated into the product we know and love, making the management of devices outside our perimeters easier and easier.
System Center Service Manager
Microsoft has announced that 2018 is the year Service Manager joins Configuration Manager with a regular six-monthly release cadence, including new features by the end of 2018. This is fantastic news for the one System Center application that never seems to get the recognition it deserves.
v1801 has already been released, and it adds the first new features we have seen since the 2012 release, as well as some much-needed security improvements such as support for TLS 1.2.
For example, there is now Azure integration with Azure Action Groups via the IT Service Management Connector, which allows you to set up rules to automatically create Incident Work Items in Service Manager for alerts generated on Azure and non-Azure resources.
The authoring toolkit has also already been released and can be downloaded here.
There is no news at this stage on whether Microsoft will release a Technical Preview of Service Manager or host a UserVoice site for end-user feedback… We can only hope.
I’m frequently asked about SLOs when I do consulting work, and I’ve realised that many people may not fully understand how SLOs work, or the key pieces that have to be in place to not only get them working as expected but to do so efficiently, so they do not adversely impact performance in our SCSM environment.
What is an SLO?
Within ITIL, an SLA (Service Level Agreement) is a contract or agreement negotiated between you as a service provider and your customer(s); it describes the service and specifies the responsibilities you will deliver to the customer. An SLO (Service Level Objective) is a specific, measurable target within that agreement. You might use a single SLA across several services or even customers, depending on your business model.
A simple example of an SLA might be that we agree to resolve a priority 1 rated incident in 4 hours.
A more complicated example might be that we agree to provide a 99.99% up time for a service.
What Components Make Up an SLO within SCSM?
To create an SLO within SCSM we need four components:
- A metric to measure
- A Queue to apply it to
- A calendar that defines our “Work Hours”
- A time set against the metric
Creating a Metric in SCSM
A metric, within SCSM, is defined as any two properties that can have a time difference between them.
For example: The Creation time and Resolution time of an Incident or Service Request.
The Metric is used as the point of measure for the workflow to use when displaying or reacting to a warning or breach event.
Out of all the SLOs I’ve seen, the two most common are IR First Contact and IR Resolution.
Creating a Queue in SCSM
Not all SLOs apply to all Work Items.
To limit which SLOs apply to which Work Items, we need to group together the Work Items we want the SLO to apply to.
Creating a Queue is a way of grouping a given type of Work Item based on criteria that you choose.
Common examples used for Queues are:
- Priority based queues (P1, P2, P3 etc.)
- Category based queues (Server, Desktop, Network etc.)
The most critical thing to watch when creating Queues is to ensure you select a class that has the minimum number of relationships required to achieve your goal. Selecting the “Incident (Advanced)” combination class for all Incident-based Queues is the leading cause of SCSM slowdowns that I have seen.
Creating a Calendar in SCSM
The calendar is used to ensure that the SLO is only calculated when support staff are at work and not over weekends or overnight (if you don’t work in a 24×7 organization).
You can have multiple calendars if you have different support groups working different hours, but for most organizations there is a single support schedule that the entire team works to.
Creating an SLO in SCSM
To create an SLO you have to have all of the prerequisites created and available.
The SLO is then just a case of selecting the time to set against the metric type and applying it to a given queue.
Within the SLO creation wizard you will be asked for both a warning time and breach time.
The warning time triggers an event a given amount of time before the SLO breaches, allowing you to have an e-mail sent to the relevant parties to give them fair warning that the Work Item needs attention.
The breach time triggers an event at the moment of the breach and can be used to notify management or an escalation team if required.
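Under the hood, these target times are just the metric's start property plus the SLO's time, counted only across the calendar's work hours. A rough sketch of that arithmetic (Python, with example work hours and dates; SCSM's workflow engine does this for you):

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 8, 17  # example calendar: Mon-Fri, 08:00-17:00

def add_business_hours(start, hours):
    """Walk forward one hour at a time, counting only hours that fall
    inside the calendar, until the SLO's time budget is spent."""
    current, remaining = start, hours
    while remaining > 0:
        current += timedelta(hours=1)
        if current.weekday() < 5 and WORK_START < current.hour <= WORK_END:
            remaining -= 1
    return current

# A 4-hour target logged Monday 16:00 lands Tuesday 11:00, not Monday 20:00.
breach = add_business_hours(datetime(2018, 1, 1, 16, 0), 4)  # 2018-01-01 is a Monday
```

The warning event then simply fires the configured window ahead of that computed target.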
How (and How Not) to Use SLOs in Day-to-Day Operations
In this author’s opinion, for MOST organizations, SLOs are not required and provide nothing more than a false sense of security in reports and a great source of anxiety for support staff.
I only advise customers to implement SLO’s if they have strict, contractually binding service levels that they must achieve under penalty of contract breach or financial fine.
If your organization wishes to use SLOs purely as a reporting measure after the fact, then I suggest you use some advanced reporting features to tease this information out of the data, rather than placing the stress of the SLO clock on the support staff.
In a future post I will offer an opinion on why I believe SLOs for most organizations are terrible and should be killed with fire… But that’s another post 😉
After working with SCSM for six years now, I thought there were pretty much no new surprises left for me within this product that, let’s face it, gets new features about as often as politicians do something right.
So it was with much celebration and rejoicing that I was informed of a hidden trick within the SCSM Console that we all love to hate.
A good friend and fellow SCSM tragic Shayne Ray contacted me today to share what he found.
While jumping between the SCSM Console and the Cireson Service Manager portal, Shayne hit Ctrl+F5 to refresh the browser; however, the focus at the time was on the SCSM Console, and he found something remarkable. A quick search around the interwebs finds a few mentions of it from others, but nothing official from Microsoft, so I thought I’d do a quick write-up of it all.
While in the console, in any location, if the analyst hits any of the following key combinations, the following actions are invoked:
- Ctrl+F1 – Opens a new default Incident form
- Ctrl+F2 – Opens a new Incident from a template
- Ctrl+F3 – Opens a new Request Offering from a template
- Ctrl+F4 – Opens a new Service Request from a template
- Ctrl+F5 – Opens a new Change Request from a template
- Ctrl+T – Hides or shows Tasks pane
- Ctrl+F – Opens the Advanced Search window
- Ctrl+D – Hides or Shows the Details Pane
- Ctrl+1 – Selects the Administration Workspace
- Ctrl+2 – Selects the Library Workspace
- Ctrl+3 – Selects the Work Items Workspace
- Ctrl+4 – Selects the Configuration Items Workspace
- Ctrl+5 – Selects the Data Warehouse Workspace
- Ctrl+6 – Selects the Reporting Workspace
- Alt+F1 – Hides or Shows the Navigation pane
You learn something new every day! 🙂
The question of tracking Operating Systems within the Cireson Asset Management solution came up the other day and I thought I’d put together a quick blog post to cover off why we would do this and more importantly how.
Why Track OS Versions in Asset Management?
First off, I think it is important to ask yourself why you would want to track Operating Systems within your organisation, as it might not give you any metrics or data that would actually be useful.
For example: if your organisation has an Enterprise Agreement with Microsoft that covers Windows for all of your PCs, why do we need to report on it? If we know for sure that we are covered regardless of which version of the OS is used, then there are no useful licensing reports to gain about OSs.
However, we could get some reports on how our upgrades are going, or, if a particular threat targets a specific OS, we could quickly report on what our exposure would be.
So the first thing you really need to do is determine whether it is worth tracking Operating Systems before investing time and effort into setting this up.
How to Track OS Versions in Asset Management
If we have decided to track OS versions then we need to make sure we cover all OS’s that we want to track by creating Software Assets for each of the branches that we want to track.
For example: if you want to track just major versions (Windows 7, 8, 10), then it is possible to create a Software Asset for each of these without needing to go to any lower level.
However, if you are trying to ensure workstations are up to date, then you will have to create a Software Asset for each SKU of the Windows OS (e.g. Windows 10 Home, Windows 10 Enterprise).
Once all individual OSs are tracked, I would also suggest creating two Software Assets called “All Windows Desktop OS’s” and “All Windows Server OS’s”. These will have bundle rules for all of the OSs so you can track licensing if you have a limited number of OS licences.
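Bundling is then a mechanical split on the product name. As an illustration (Python, purely to show the grouping rule; the bundle names match the two suggested above):

```python
def bundle(os_names):
    """Split tracked OS Software Assets into the two suggested bundles:
    anything with "Server" in the name is a server OS, the rest desktop."""
    bundles = {"All Windows Server OS's": [], "All Windows Desktop OS's": []}
    for name in os_names:
        key = "All Windows Server OS's" if "Server" in name else "All Windows Desktop OS's"
        bundles[key].append(name)
    return bundles
```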
Below is a list of OSs that could be tracked, but it is up to the individual which ones to use.
|Microsoft Windows Server 2003 Enterprise Edition R2|
|Microsoft Windows Server 2003 Standard Edition|
|Microsoft Windows Server 2003 Standard Edition R2|
|Microsoft Windows Server 2003 Web Edition|
|Microsoft Windows Server 2008 Enterprise|
|Microsoft Windows Server 2008 R2 Enterprise|
|Microsoft Windows Server 2008 R2 Standard|
|Microsoft Windows Server 2008 Standard|
|Microsoft Windows Server 2012 Datacenter|
|Microsoft Windows Server 2012 R2 Datacenter|
|Microsoft Windows Server 2012 R2 Standard|
|Microsoft Windows Server 2012 Standard|
|Windows Server 2016 Datacenter|
|Windows Server 2016 Standard|
|Microsoft Windows 10 Enterprise|
|Microsoft Windows 10 Pro|
|Microsoft Windows 7 Enterprise|
|Microsoft Windows 7 Professional|
|Microsoft Windows 7 Ultimate|
|Windows 7 Enterprise|
|Windows 7 Professional|
|Windows 7 Ultimate|
|Microsoft Windows 8 Enterprise|
|Microsoft Windows 8 Professional|
|Microsoft Windows 8.1 Enterprise|
|Microsoft Windows 8.1 Professional|
|Microsoft Windows Vista|
|Windows XP Professional|
How to Enter OS Versions in Asset Management
Now all you have to do is enter these into Cireson Asset Management and we are done, right?
Not so fast.
We have a few options to play with here, including one called “This is an OS”. It seems fairly obvious that we would select this, right?
Not so much.
This option looks in a separate location of the ConfigMgr data instead of the Add or Remove Programs list. But the Windows OS is also recorded in the Add or Remove Programs list, often with more detail, so it is better not to use this option.
Entering Software Assets one at a time can be a challenge and take a lot of time, so to make it easier, here is an Excel file with all the information you need to make this happen by importing via Cireson Asset Import or Cireson Asset Excel.
In the last week I’ve been doing a couple of presentations on Change Management and where to start for businesses. This post will talk about the IT Service Management life-cycle and, most importantly, delivering services to our end users, or customers, that are successful, have little to no negative impact on business continuity during deployment, and reduce business risk wherever possible.
This post will be focusing on Change Management and where to start with it, what are best practices and how do we make it easier on ourselves.
To kick off, I think it is important that we have a clear idea of what a change is and why change management is important.
“A change is defined by ITIL as the Addition, Modification or Removal of anything that could have an effect on IT services.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)
Now I would make one slight modification to this statement and replace IT Services with Business Services.
Why should we restrict the amazing work we are doing to just IT?
ITIL also tells us that “Change Management provides value by promptly evaluating and delivering the changes required by the business, and by minimizing the disruption and rework caused by failed changes.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)
Some of the key words that we should all keep a focus on are “Promptly” evaluating, “Minimizing” disruption to end users and also “Minimizing Rework” that is required when a change fails.
There are some common challenges that people have when looking at Change Management for the first time, or looking at improving their Change Management processes. Here are three of the main challenges that I see when discussing Change Management with clients:
Stuck with someone else’s mess
Many people fail before they even start because they are buried in a mess created before they arrived, either from a failed attempt to implement change management or from an over-complicated system that has always existed.
And as we know, many systems are maintained simply because “That’s the way it’s always been done”.
Getting buy in from the entire business is important. Having the business behind a push for better change management will enable you to wipe the slate clean and build something less complex and more targeted to the business.
Not sure where to start
Change management can be a big beast and can get people all bent out of shape knowing they will have to follow some sort of formalized process.
However, as we will see, there is no need for Change management to be as complex as people think it will be.
It’s Too Complex
Yes, this would have to be my personal number one bugbear with some change management processes.
But as we mentioned earlier, ITIL tells us that change management should “provide value by promptly evaluating and delivering the changes…..”
So if a change management process is taking too long or is an arduous process, then we know we have it wrong.
Too many fingers in the pie
The heading is an oversimplification of the point.
What I’m trying to explain here is that many of the groups or sub-groups we work with have their own set procedures, at varying levels of maturity, and quite often these independent groups think they have it right and want to take care of everything themselves.
However, these processes are often independent of each other and can get in each other’s way or “rework” the same information or data several times.
Imagine, if you will, that every car manufacturer had their own set of road rules. Individually these rules may work and may be a perfect procedure for that manufacturer. That’s all well and good until we all start sharing the same road.
Then, we have chaos.
Even though everyone is following a set and tested procedure, if we don’t take into consideration all the other systems within our business then we see conflicting issues and changes that were doomed to fail.
In IT specifically, as our systems become ever more complex, these issues occur on an all too frequent basis.
I’m sure everyone has an example of where a minor change to one system had a catastrophic outcome for some unrelated system that no one knew about or had not considered.
Good change management can reduce the amount of time spent on unplanned work but it has to be effective.
Bad change management will just add an administration layer to the firefighting we always do.
This both wastes time and does nothing to reduce the amount of unplanned work we have.
From what we have talked about so far there are some basic rules we can stick to that will help guide us to a good Change Management process.
Promptly is the key
If a process takes too long then no one is going to want to follow it. High risk issues are always going to take longer, but there is no need to drag our feet where we don’t need to.
Low risk issues should be processed speedily and maybe even approved automatically.
Which leads us to our next point:
Fit for Purpose
There is no need to bother your CAB with basic, routine changes. If the CAB can clearly define what they require for a basic low risk change, then make sure your process hits that and move on.
The CAB has bigger fish to fry and more risk to deal with.
So why not have a simple process for low risk changes: one Change Manager reviews, then do the change. SIMPLE!
How do we make sure that we capture these key points?
Create templates where possible. Inject data we already know so people don’t have to guess at values or (like I am sure we have all seen) people just making up values.
It is more important to be able to get a consistent and complete result than it is to get the perfect result. Consistency allows us to report and see where we are doing well, and where we can improve.
More processes to make all this happen is NOT the solution. Often less is more when it comes to these processes.
We can all think of a change that we SHOULD do but never quite get around to it. How about rebooting a server? Depending on the server this could be low risk, minimal impact, not worth going to CAB over…. But should it be a change?
Remember a change is defined as “…the Addition, Modification or Removal of anything that could have an effect on IT services.”
Well why not have a change process that accepts that a low risk server can be rebooted without CAB approval, just so long as it is recorded?
Why not automate it?!
Of course none of this is any good if we don’t know the risk.
More specifically, Correct Risk.
So what is the best way to assign risk to our IT Services?
This is a big topic that usually takes many committees and sub committees, revisions, arguments, another committee and more arguments.
There is a much simpler way to assign risk to an IT service, and you have most probably already done it: Disaster Recovery (D.R.).
Most organisations have already classified systems for Disaster Recovery. We have had the argument and we have spent the money to make sure those “Business Critical” systems have good quality D.R.
If you are like most organisations I’ve worked for, you will have gone through the process of “What do we cover with D.R.?”
And we start by including EVERYTHING.
We then get the quote back from a vendor on what that DR solution would cost and we quickly write off half a dozen services without even thinking.
And again and again we go until we have a DR solution that covers our Business Critical systems.
Guess what? They are High risk.
Some systems we have internal HA solutions for. Maybe SCSM with 2 Management servers.
Not Critical…. We could live off paper and phone calls for a few hours or even days without it…. Let’s say medium risk.
Then we have everything else. Low risk.
Simple. Why over complicate it?
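The D.R.-driven classification above boils down to a three-branch rule. A minimal sketch, assuming a hypothetical `service` record with `covered_by_dr` and `has_ha` flags standing in for whatever your CMDB actually records:

```python
def change_risk(service):
    """Derive change risk from resilience decisions the business
    has already made. `service` is a hypothetical dict; the flag
    names are illustrative, not from any real CMDB schema."""
    if service.get("covered_by_dr"):
        return "High"    # business critical: it made the D.R. cut
    if service.get("has_ha"):
        return "Medium"  # internally resilient, a survivable outage
    return "Low"         # everything else

print(change_risk({"name": "ERP", "covered_by_dr": True}))  # High
print(change_risk({"name": "SCSM", "has_ha": True}))        # Medium
print(change_risk({"name": "Tier Watcher"}))                # Low
```

The point is not the code itself but that the decision is mechanical once the D.R. classification exists, so it can be pre-populated rather than re-argued in committee for every change.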
So all the theory in the world is all well and good but what are some real world examples? Well I’m glad you asked.
I wanted to show you a range of items that had a common theme but ranged in complexity and risk. That way I can demonstrate to you the way it can be done in the real world with a real scenario.
What better scenario than our own products.
However, there is no reason that these examples could not be easily translated to ANY system that you may have in your organisation.
Our first example is the Cireson Essentials apps, such as Advanced Send Email, View Builder, and Tier Watcher.
These can be classified as “Low Risk” because only IT Analysts use them, and IT Analysts can do their job without them; it would take more effort, but they could still work.
Second is the Self Service Portal. This affects more than just the IT Analysts: end users lose the ability to view and update their Work Items through Self Service. But all of this can still be done via phone or e-mail, although it will take longer and not be as simple for end users.
Finally, an Asset Management upgrade is a high risk change for our system. Asset Management affects a wide range of systems and impacts SRs as well as the support details that analysts use.
In addition, the upgrade is not a simple task: it involves reports, cubes, management packs, DLLs and PC client installs.
So let’s take a look at what this looks like in the real world.
So when creating a change management process, surely there are some simple steps we can follow to get the ball rolling?
Here is what I like to tell people are the 4 key pieces of a successful change management practice.
Keep it Simple and Prompt
Keep the process as simple and as prompt as possible. Start by creating basic process templates for Low, Medium and High risk changes. There are always going to be exceptions that need a little more process, but wherever possible stick to the standard three basic templates.
Test, Test and Retest
The number one reason for failure in the changes I’ve been involved in is a lack of testing. There is nothing like the old “Worked fine in my lab… “ line.
The amount of rework or “unplanned” work that stems from an untested change is huge, and even a little testing can catch big issues.
Get the right people involved
We are not always experts in what a system is, what is does or how it should work.
How many times has your testing for an application package been to install it and if it installs without an error, it must be good?
What if when an end user logs on the whole thing crashes?
So even getting end users involved in your testing of minor changes can be a huge benefit.
Review Your Changes
So many of the places I see never have a formal review process.
These are critical for making sure the processes don’t stray from a standardized approach and that we know that all the obvious things are right and the whole thing is not going to blow up when we hit go.
Just reviewing the failures to find what went wrong is not enough.
It is also important that the changes that went right are fed back into future decisions, to help avoid the ones that go wrong. This feedback should find its way back into change templates AND the base documentation for the systems, so we keep all our processes up-to-date.
These don’t have to be long but these reviews can identify ways to make the process simpler, where a template can be created or where the CAB can be cut out of the picture entirely!
One fantastic question I had recently was “How many changes should we template?”
This is a great question as many people think that templating everything they do is a waste of time. This is not the case.
If you have a change that you do on a recurring basis (not a one-off), even if it only occurs once every six months or year (in fact I’d argue especially if it is only once or twice a year), it is worth templating for two main reasons:
- Does anyone remember the correct process for the change?
Often a single person is responsible for a change and all the knowledge can be locked up in their head. By templating processes this way, we can ensure that the knowledge is available to everyone so if the original person is no longer available the change can still be a success.
- Was the process successful last time we ran it and if not, what went wrong so we don’t do it again?
If you are only doing a change once or twice a year a template is a great way of making sure that lessons learnt are never lost and mistakes are worked out of the process.
A standard approach might include a set of standard activities that are carried across all risk levels, but we just keep adding to the process as we move up the risk profile. A basic template might look something like this:
The above example is just that: an example. It is in no way prescriptive guidance that you should follow religiously, but more of a starting point. There are always going to be changes that require more steps because they are more complex. What is important is that the basics are covered and a standard approach is followed, encompassing the key points outlined in this article.
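The layered structure described above, where each risk level carries the activities of the level below and adds its own, can be sketched as data. The activity names here are hypothetical placeholders, not a prescribed workflow:

```python
# Low risk: the minimum every change records, per the "reboot a
# server, just so long as it is recorded" principle above.
low = ["Record change details", "Change Manager review",
       "Implement", "Review outcome"]

# Medium risk: same backbone, with testing and user notification
# inserted before implementation.
medium = low[:2] + ["Test in lab", "Notify affected users"] + low[2:]

# High risk: same again, adding CAB approval and a back-out plan
# before implementation.
high = medium[:4] + ["Document back-out plan", "CAB approval"] + medium[4:]

templates = {"Low": low, "Medium": medium, "High": high}

for risk, steps in templates.items():
    print(f"{risk}: " + " -> ".join(steps))
```

Because higher-risk templates are built from the lower-risk ones, every change follows the same standard backbone; the process only grows as the risk profile does.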
So to sum this all up:
- Prompt and Simple Process. Make it quick and simple
- Standardize ALL changes to a simple set of rules and create templates
- Make sure your changes are fit for purpose. Only bother the CAB when you need to and have the right people involved
- Simple risk calculation (use disaster recovery plans if you don’t know where to start)
- TEST, TEST and RETEST!
- Review and document your changes to improve what you do