As another year gets underway and we look forward to more technological breakthroughs and industry-changing trends, we often have to stop, re-evaluate our investments in some technologies, and reaffirm our commitment to others.
2017 saw vast swings in technology: a Bitcoin bubble to rival any other bubble in history, amazing advances in Artificial Intelligence, Apple deliberately slowing their phones in an apparent attempt to make us buy the "latest and greatest" models, and cyber-attacks at an all-time high, including huge losses of user details across a wide range of companies such as Yahoo, Kmart, Equifax, Imgur, and even Uber.
2018 is shaping up to be even more disruptive, as we see early indications of the bubble bursting and a potential collapse of Bitcoin, exciting advancements in mobile phone technology and VR, and one of the most impactful security vulnerabilities to ever hit the industry in the form of the Meltdown and Spectre exploits.
So what are the technologies worth watching, and how might they affect our businesses, our industry, or even society itself?
Here are my top 5 that I believe will make huge impacts in 2018.
Block Chain (The tech behind Bitcoin)
Bitcoin has been in the news a lot of late for some good reasons and some bad. More importantly than the massive swings in value of Bitcoin is the technology that makes it all work.
Block Chain is a new way of decentralizing the data required to drive many applications, meaning our transaction data no longer has to be stored and secured by a specific company (Uber, Airbnb, Twitter, Google, Facebook, etc.). Instead, Block Chain databases allow for the authentication of a transaction (let's say a driver picking up and dropping off a passenger) while keeping it encrypted, open source, highly available and unable to be corrupted without anyone noticing.
This technology does not have to be limited to financial transactions; it can also be used to verify the identity of an individual. For example, Australia Post have announced they will be using Block Chain technology within their Digital ID platform.
I think 2018 will be the watershed year for Block Chain and for how it affects the way we, in the IT industry, think about data and trust across a wide range of applications.
AI, Bots and Digital Assistants
We've slowly seen the emergence of digital assistants such as Apple's Siri, Amazon's Echo and Alexa, Google's Assistant and even Microsoft's Cortana, but these have been more of a novelty than something we rely on in our day-to-day lives.
As AI technology improves, even with basic pattern-recognition improvements and big-data mining techniques, these assistants will become more and more ubiquitous and will really start to make an impact on our daily lives.
We are already seeing the emergence of chat bots in areas such as banking (great examples are Wells Fargo and Australia's Commonwealth Bank); however, each of these chat bots is specific to its own area of expertise and exposed only to a specific data set that it can reply about.
Once we have a way to retrieve all of the required data from all of the companies we interact with, then we are going to see some great leaps ahead in how we interact with companies, consumers and even government agencies.
With access to more machine learning, in 2018 we should start to see proactive skills appear in our digital personal assistants that will notify us of suspicious banking transactions, tell us when our friends or pizza delivery are arriving, remind us when we are due for a health check, or even book all of our flights and accommodation ahead of time to get the best deals.
VR vs. AR vs. MR (because we need more acronyms in our industry!)
Virtual Reality is awesome!
VR headsets such as the HTC Vive and the Oculus Rift are not new to 2018, but we will see increasing numbers of games and content that are tuned to VR. If you have ever used a VR headset then you will agree that playing an existing high-end game in VR (such as Fallout 4) is cool, but clunky, as the original controls were never built with VR in mind. In 2018, new high-end content built for VR from the ground up will bring a level of realism to games that will literally be game changing. 🙂
Some tech that you may not have played with is AR, or Augmented Reality, especially in the form of the Microsoft HoloLens. I had a chance to try this nearly two years ago, and the ability to see the real world but augment it with virtual objects was revolutionary, though also limited by its narrow field of view.
MR, or Mixed Reality, is the next big thing and Microsoft are the leaders in this space with all the lessons they have learnt from HoloLens.
What is MR? Take all the positives of VR but remove the need for pre-mapping a room with special sensors. This opens up the world to a virtual experience without limitations.
2018 will see more innovation and a faster move towards some sort of augmentation of how we perceive the world. It may start with big, bulky headsets but rapidly move to helmets, windscreens and regular old glasses before we start wearing them as contact lenses!
If the argument of VR vs. MR ever comes to a head, like the good old days of VHS vs. Betamax or Blu-ray vs. HD DVD, consider me squarely in the MR camp.
Being a System Center tragic, I couldn't predict technology in 2018 without including a note about System Center and what I think is on the horizon for the next 12 months.
System Center Configuration Manager
Everyone's favourite System Center product would have to be Configuration Manager. This has to be one of the easiest products in the IT industry to predict, as we are not only given the opportunity to vote on the features we want via the UserVoice feedback page, but Microsoft even give us the next version ahead of time with the monthly Technical Preview releases.
One thing that is obvious from Microsoft's direction is that Intune will become more and more integrated into the product we know and love, making the management of devices outside our perimeters easier and easier.
System Center Service Manager
Microsoft have announced that 2018 will be the year Service Manager joins Configuration Manager with a regular cadence of six-monthly releases, with new features arriving by the end of 2018. This is fantastic news for the one System Center application that never seems to get the recognition it deserves.
v1801 has already been released, and it adds the first new features we have seen since the release of 2012, as well as some much-needed security improvements, such as support for TLS 1.2.
For example, there is now Azure integration with Azure Action Groups via the IT Service Management Connector that allow you to set up rules to create incident work items automatically in Service Manager for alerts generated on Azure and non-Azure resources.
The authoring toolkit has also already been released and can be downloaded here.
There is no news at this stage on whether Microsoft will release a Technical Preview of Service Manager or host a UserVoice site for end-user feedback… We can only hope.
I had a partner call me the other day and ask if Cireson had a “Hardening Guide” for our SCSM Self-Service and Analyst portals.
This is not a frequent request, as it is usually only government or defence industries that lock down their systems to this extent, so it was no surprise that we had never been asked this question before. After much back and forth we were able to put together a hardening guide for our portal, and I thought I would share what that looks like and how to achieve it for the rare occasions when this level of security is required.
Some Basic IIS Hardening Details
Within IIS it is possible to restrict which file extensions can be executed and also which "verbs" (the HTTP commands, such as GET and POST, that IIS will process) are allowed to be called.
This reduces the exposure of what type of code can be executed and therefore reduces an attacker's ability to run malicious code. It is never possible to remove all attack surface from an internet-facing server, as it must execute some code or the web page would never be rendered! Instead, hardening IIS is about reducing the types of code that can be executed, so we allow only what we need and block anything surplus to requirements.
IIS allows us to do this by restricting the file extensions of the type of code we want to allow.
This is done in the “Request Filtering” section of our website within IIS.
This allows us to filter by file extension, URL, Verbs, headers and several more.
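To make that concrete, here is a sketch of what verb and URL filtering can look like in a site's web.config. The element names are part of the standard IIS request-filtering schema, but the specific values are illustrative examples only, not part of the Cireson hardening guide:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Allow any verb except those explicitly denied below -->
      <verbs allowUnlisted="true">
        <add verb="TRACE" allowed="false" />
      </verbs>
      <!-- Reject any request whose URL contains a parent-path sequence -->
      <denyUrlSequences>
        <add sequence=".." />
      </denyUrlSequences>
    </requestFiltering>
  </security>
</system.webServer>
```

The same settings can be reached via the "HTTP Verbs" and "URL" tabs of the Request Filtering screen in the IIS GUI.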
For the purposes of this article we are going to be very generic and only allow specific extensions to run. In a higher security model it may be necessary to block anything outside of very specific file names, dates, URLs etc., but in my opinion, if you need to lock down your web server that far, then it shouldn't be on the web. 🙂
The File Name Extensions tab shows a list of pre-defined file extensions and whether each is allowed or blocked. One other setting, not shown on the main screen, is the general request filtering settings.
These include generic rules like "Allow unlisted file name extensions", which is turned on by default. This basically says: if the file extension has not been specifically blocked, then let it run.
You can see where this can be a bad thing….
Basic Hardening Rules
The rules can be administered from the IIS GUI or directly from the configuration files within the web page.
Using the GUI, our first requirement is to stop unlisted file extensions from running. This is as simple as unchecking the checkbox within the "Edit Request Filtering Settings" screen for the IIS website we are editing.
After this, we need to add the following list of extensions as allowed extensions.
Yes, there is an extension of just .
This is to allow pages without any extension whatsoever to run. This is common, as the IIS server will render the code in the background and then present the page with no extension. NOTE: This is not the same as *.*, which would allow ALL extensions to run; this simply allows pages with no extension to be shown.
All these settings are stored within the Web.config file on the file system and that gives advanced admins a faster way to do this than via the GUI.
Using the Web.config file, open the file in an XML editor of your choice (Notepad or Notepad++, for example) and search for the <security> section within the file.
Replace the default settings with the following section.
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="1073741824" />
    <fileExtensions allowUnlisted="false">
      <add fileExtension="." allowed="true" />
      <add fileExtension=".js" allowed="true" />
      <add fileExtension=".svg" allowed="true" />
      <add fileExtension=".css" allowed="true" />
      <add fileExtension=".ttf" allowed="true" />
      <add fileExtension=".png" allowed="true" />
      <add fileExtension=".gif" allowed="true" />
      <add fileExtension=".woff" allowed="true" />
      <add fileExtension=".html" allowed="true" />
    </fileExtensions>
  </requestFiltering>
</security>
And that’s it.
So if you ever come across the requirement to “Harden” your web pages, this should help you.
In the last week I've been doing a couple of presentations on Change Management and where to start for businesses. This post will talk about the IT Service Management life-cycle and, most importantly, delivering services to our end users, or customers, that are successful, have little to no negative impact on business continuity during deployment, and reduce business risk wherever possible.
This post will be focusing on Change Management and where to start with it, what are best practices and how do we make it easier on ourselves.
To kick off, I think it is important that we have a clear idea of what a change is and why change management is important.
“A change is defined by ITIL as the Addition, Modification or Removal of anything that could have an effect on IT services.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)
Now I would make one slight modification to this statement and replace IT Services with Business Services.
Why should we restrict the amazing work we are doing to just IT?
ITIL also tells us that “Change Management provides value by promptly evaluating and delivering the changes required by the business, and by minimizing the disruption and rework caused by failed changes.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)
Some of the key words that we should all keep a focus on are “Promptly” evaluating, “Minimizing” disruption to end users and also “Minimizing Rework” that is required when a change fails.
There are some common challenges that people have when looking at Change Management for the first time, or looking at improving their Change Management processes. Here are three of the main challenges that I see when discussing Change Management with clients:
Stuck with someone else’s mess
Many people fail before they even start because they are buried in a mess created before they arrived. Either because of a failed attempt to get change management implemented or just a complicated system that has always existed.
And as we know many systems are just maintained because “That’s the way it’s always been done”.
Getting buy in from the entire business is important. Having the business behind a push for better change management will enable you to wipe the slate clean and build something less complex and more targeted to the business.
Not sure where to start
Change management can be a big beast and can get people all bent out of shape knowing they will have to follow some sort of formalized process.
However, as we will see, there is no need for Change management to be as complex as people think it will be.
It’s Too Complex
Yes, this would have to be my personal number one bug bear with some change management processes.
But as we mentioned earlier ITIL tells us the change management should “Provide value by promptly evaluating and delivering the changes…..”
So if a change management process is taking too long, or is an arduous process, then we know we have it wrong.
Too many fingers in the pie
That heading is an oversimplification of the point. What I'm trying to explain is that many of the groups or sub-groups we work with do have set procedures, at varying levels of maturity, and quite often these independent groups think they have it right and want to take care of it all themselves.
However, these processes are often independent of each other and can get in each other's way, or "rework" the same information or data several times.
Imagine, if you will, that every car manufacturer had their own set of road rules. Individually these rules may work and may be a perfect procedure for that manufacturer. That's all well and good until we all start sharing the same road.
Then, we have chaos.
Even though everyone is following a set and tested procedure, if we don't take into consideration all the other systems within our business, then we will see conflicts and changes that were doomed to fail.
Specifically in IT, as our systems become ever more complex, these issues occur on an all-too-frequent basis.
I'm sure everyone has an example of a minor change to one system having a catastrophic outcome for some unrelated system that no one knew about or had not considered.
Good change management can reduce the amount of time spent on unplanned work but it has to be effective.
Bad change management will just add an administration layer to the firefighting we always do.
This is both a waste of time and does not reduce the amount of unplanned work we have.
From what we have talked about so far there are some basic rules we can stick to that will help guide us to a good Change Management process.
Promptly is the key
If a process takes too long then no one is going to want to follow it. High risk issues are always going to take longer but there is no need to drag our feet where we don’t need to.
Low risk issues should be able to be speedily processed and maybe even automatically approved.
Which leads us to our next point.
Fit for Purpose
There is no need to bother your CAB with basic routine changes. If the CAB can clearly define what they require for a basic low-risk change, then make sure your process meets that requirement and move on.
CAB have bigger fish to fry and more risk to deal with.
So why not have a simple process for low-risk changes: one Change Manager reviews it, then the change is done. SIMPLE!
How do we make sure that we capture these key points?
Create templates where possible. Inject data we already know so people don’t have to guess at values or (like I am sure we have all seen) people just making up values.
It is more important to be able to get a consistent and complete result than it is to get the perfect result. Consistency allows us to report and see where we are doing well, and where we can improve.
More processes to make all this happen is NOT the solution. Often less is more when it comes to these processes.
We can all think of a change that we SHOULD do but never quite get around to it. How about rebooting a server? Depending on the server this could be low risk, minimal impact, not worth going to CAB over…. But should it be a change?
Remember a change is defined as “…the Addition, Modification or Removal of anything that could have an effect on IT services.”
Well why not have a change process that accepts that a low risk server can be rebooted without CAB approval, just so long as it is recorded?
Why not automate it?!
Of course none of this is any good if we don’t know the risk.
More specifically, Correct Risk.
So what is the best way to assign risk to our IT Services?
This is a big topic that usually takes many committees and sub committees, revisions, arguments, another committee and more arguments.
There is a much simpler way to assign risk to an IT service and you most probably have already done it. D.R.
Most organisations have already classified their systems for Disaster Recovery. We have had the argument, and we have spent the money to make sure those "Business Critical" systems have good-quality D.R.
If you are like most organizations I’ve worked for you will have gone through the process of “What do we cover with DR?”
And we start by including EVERYTHING.
We then get the quote back from a vendor on what that DR solution would cost and we quickly write off half a dozen services without even thinking.
And again and again we go until we have a DR solution that covers our Business Critical systems.
Guess what? They are High risk.
Some systems we have internal HA solutions for. Maybe SCSM with 2 Management servers.
Not Critical…. We could live off paper and phone calls for a few hours or even days without it…. Let’s say medium risk.
Then we have everything else. Low risk.
Simple. Why over complicate it?
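For illustration only, this rule of thumb can be sketched as a small function. The service names and flags below are hypothetical examples, not from any real environment:

```python
# Rough sketch of the DR-based risk rule of thumb described above.
def change_risk(dr_covered: bool, has_internal_ha: bool) -> str:
    """Assign a change-risk level to a service based on its DR posture."""
    if dr_covered:          # covered by the paid-for DR solution -> business critical
        return "High"
    if has_internal_ha:     # internal HA only (e.g. two SCSM management servers)
        return "Medium"
    return "Low"            # everything else

# Hypothetical services classified by the rule above
services = {
    "Payroll":  change_risk(dr_covered=True,  has_internal_ha=True),
    "SCSM":     change_risk(dr_covered=False, has_internal_ha=True),
    "Test lab": change_risk(dr_covered=False, has_internal_ha=False),
}
print(services)
```

The point is that the hard classification work has already been done during the DR conversations; the change process just reuses it.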
So all the theory in the world is all well and good but what are some real world examples? Well I’m glad you asked.
I wanted to show you a range of items that had a common theme but ranged in complexity and risk. That way I can demonstrate to you the way it can be done in the real world with a real scenario.
What better scenario than our own products.
However, there is no reason that these examples could not be easily translated to ANY system that you may have in your organisation.
Our first example is the Cireson Essentials Apps. Like Advanced Send E-mail, View Builder, and Tier Watcher.
These can be classified as "Low Risk" because only IT Analysts use them, and IT Analysts can do their job without them; it would take more effort, but they could still work.
Second is the Self-Service portal. This affects more than just the IT Analysts, as end users would be unable to view and update their Work Items or use Self-Service. However, all of this can still be done via phone or e-mail, although it will take longer and not be as simple for end users.
Finally, an asset management upgrade is a high-risk change for our system. Asset Management affects a wide range of systems and impacts SRs as well as the support details that analysts use.
In addition, the upgrade of AM is not a simple task. During the upgrade there are reports, cubes, management packs, DLLs and PC client installs that are required.
So let’s take a look at what this looks like in the real world.
So when creating a change management process surely there are some simple steps we can follow to get the ball rolling.
Here is what I like to tell people are the 4 key pieces of a successful change management practice.
Keep the process as simple and as prompt as possible. Start by creating basic process templates for Low, Medium and High risk changes. There are always going to be exceptions that need a little more process, but wherever possible stick to the standard three basic templates.
The number one reason for failure in the changes I've been involved with is a lack of testing. There is nothing like the old "Worked fine in my lab…" line.
The amount of rework or "unplanned" work that stems from an untested change is huge, and even a little testing can catch big issues.
Get the right people involved
We are not always experts in what a system is, what it does or how it should work.
How many times has your testing for an application package been to install it and if it installs without an error, it must be good?
What if when an end user logs on the whole thing crashes?
So even getting end users involved in your testing of minor changes can be a huge benefit.
So many places I see never have a formal review process.
Reviews are critical for making sure processes don't stray from a standardized approach, and for knowing that all the obvious things are right and the whole thing is not going to blow up when we hit go.
Just reviewing the failures to find what went wrong is not enough.
It is also important that the changes that went right are fed back into future decisions to help avoid the ones that go wrong. This feedback should find its way back into change templates AND the base documentation for the systems, so we keep all our processes up to date.
These don’t have to be long but these reviews can identify ways to make the process simpler, where a template can be created or where the CAB can be cut out of the picture entirely!
One fantastic question I had recently was "How many changes should we template?"
This is a great question as many people think that templating everything they do is a waste of time. This is not the case.
If you have a change that you do on a recurring basis (not a one-off), even if it only occurs every six months or every year (in fact, I'd argue especially if it only occurs once or twice a year), it is worth templating for two main reasons:
- Does anyone remember the correct process for the change?
Often a single person is responsible for a change and all the knowledge can be locked up in their head. By templating processes this way, we can ensure that the knowledge is available to everyone so if the original person is no longer available the change can still be a success.
- Was the process successful last time we ran it and if not, what went wrong so we don’t do it again?
If you are only doing a change once or twice a year a template is a great way of making sure that lessons learnt are never lost and mistakes are worked out of the process.
A standard approach might include a set of standard activities that are carried across all risk levels, but we just keep adding to the process as we move up the risk profile. A basic template might look something like this:
The above is just an example. It is by no means prescriptive guidance that you should follow religiously, but more of a starting point. There are always going to be changes that require more steps because they are more complex. What is important is that the basics are followed and a standard approach is used, encompassing the key points outlined in this article.
So to sum this all up in one paragraph:
- Prompt and Simple Process. Make it quick and simple
- Standardize ALL changes to a simple set of rules and create templates
- Make sure your changes are fit for purpose. Only bother the CAB when you need to and have the right people involved
- Simple risk calculation (use disaster recovery plans if you don’t know where to start)
- TEST, TEST and RETEST!
- Review and document your changes to improve what you do
I’ve had several customers come to me over the past few years complaining about one or more Runbooks showing that they are in a running state but they don’t show that there is any activity. Neither in the Log within the Runbook designer, nor in the console.
As you can see here the Runbook has been invoked and is in “Play” but there is no log data showing what step it is currently processing.
The thing that the Runbooks have in common are they are triggered from Service Requests within SCSM, usually from a Request Offering from the self-service portal.
On closer inspection, it turns out that when passing properties to a Runbook, the Initialize Data activity does not "cleanse" the data, and therefore reserved characters are not escaped when used as input to the Runbook. So when a value gets passed that contains a character like &, > or <, Orchestrator tries to interpret it as a command.
Don’t use &, < or > in any value that you pass to Orchestrator.
Within SCSM ensure any enumeration list or simple list that the end users may select from do not contain the &, < or > characters.
What gets harder is when the end user types this detail into a free-form text field. This you can prevent with a little .NET Regular Expression trickery.
On any text field that the end user will have free for them to enter text as they see fit that will be used to pass to a Runbook, use the following .NET Regular Expression filter to filter out any special characters:
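The expression itself isn't reproduced above, so here is one candidate pattern (my own assumption, not official guidance): `^[^&<>]*$`, which matches only values containing none of the reserved characters. A quick sketch in Python shows the behaviour; the pattern itself works equally as a .NET Regular Expression in a Request Offering validation field:

```python
import re

# Hypothetical validation pattern: accept only strings that contain
# none of the XML-reserved characters &, < or >.
PATTERN = re.compile(r"^[^&<>]*$")

def is_safe_for_runbook(value: str) -> bool:
    """Return True if the value can be passed to Orchestrator unmodified."""
    return PATTERN.match(value) is not None

print(is_safe_for_runbook("Reboot server 12"))   # no reserved characters
print(is_safe_for_runbook("R&D server down"))    # contains '&', rejected
```

Applied as a validation rule on the text prompt, this forces the end user to remove the offending characters before the request can be submitted.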
I am very excited to be joining the team and to contribute to this amazing growth the company is seeing across the globe. I’ve worked with Cireson service & asset management solutions for over three years now, and I look forward to helping customers and partners make the most out of their Microsoft System Center investment in the APAC region.
Cireson has also continued their local investment by expanding the support team in to the Asia Pacific region. Joe Burrows has joined the support team and brings outstanding knowledge and experience with the Service Manager product as well as the Cireson products.
I am looking forward to working with System Center more closely across Australia and the rest of the Asia Pacific region.
Today I was working through some SCSM performance issues with a customer when I was alerted to this post by Thomas last year. It is a short but to the point Blog post that should make a significant difference to otherwise poor performing SCSM implementations.
I know I’ll certainly be using these tips in all future implementations.
A VIP solution is something I have been asked to incorporate in to SCSM designs by several clients in the past.
Other people have suggested solutions in the past and blogged about them. Travis Wright provided his solution here.
I have used derivations of Travis’s solution several times in the past and have continued to add to it to provide a more complete solution that not only looks good but provides simple business rules and customisable ways to identify who is and who isn’t a VIP.
This is a bit of a long post so I will try and break it in to sections.
The Management Pack
To edit any SCSM forms or classes we have to use the System Center 2012 R2 Service Manager Authoring Tool. This tool helps us create the complex XML that we need to define what a class is and how the forms should be shown. I'll not be going into the details of what the classes are or why we are doing a lot of what we are doing; others are much smarter than I am at explaining these things.
Next we have to find the existing class we want to edit so we can add the VIP value to the database and have it stored and usable by other components of SCSM.
From the list of all the classes the one we want to extend for our purpose is Domain User or Group. You can use the Search bar to search through the list or you can manually find it. Once found, we need to open it to view.
Right click the class and select View from the drop down list.
As the management pack that holds the Domain User or Group class is sealed, the extension cannot be saved into it. We therefore need to tell the Authoring Tool where to save these new properties.
Our nice clean Management Pack now looks like this:
This extension is a way of telling SCSM that it can take the existing one from the System Library Management Pack and add these extensions to it. Any time we do this the Management Pack that contains the class we are extending must be sealed. It is not possible to extend an unsealed Management Pack.
Now that we have created an extension to this class, we can define the new properties we will need.
In the main working window, click the Create Property button.
The Create Property window will appear and ask for an Internal Name. The internal name is how the code will reference this new property and, as such, cannot have any spaces. Enter a new name for the property, like this:
In the Details pane the new property should now appear.
Click the new property to select it.
Properties can contain different types of data. For example, a First Name property would need to contain a string of alphanumeric characters, but a Phone Number property might contain just numbers. We can also assign a Date Time data type to store specific date or time formats, and we can store True/False answers in a data type called Boolean.
As we just want to show whether a user is a VIP or not, we can use the Boolean data type to record Yes/No or True/False. With the new property selected, in the properties pane change the type to Boolean.
The Domain User or Group class now has a new property that can record True or False answers for if a user is a VIP.
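Behind the scenes, the Authoring Tool records this as a class extension in the unsealed management pack's XML. As a rough sketch only (the IDs below are made up, the tool generates its own, and the exact surrounding elements may differ), it looks something like this:

```xml
<ClassTypes>
  <!-- Extends the sealed System.Domain.User class with a Boolean VIP flag -->
  <ClassExtension ID="DomainUserOrGroup.VIP.Extension" Accessibility="Public"
                  Base="System!System.Domain.User">
    <Property ID="IsVIP" Type="bool" Key="false" />
  </ClassExtension>
</ClassTypes>
```

You should never need to hand-edit this; it is simply useful to know what the tool is producing when troubleshooting a management pack import.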
The next question is how do we see this property within the SCSM console and allow Analysts to set a user as a VIP. To do this we have to edit the forms….
The User Form
Just like the class property that we just created, form designs and layouts are saved in XML format. It is possible to edit the forms from an XML or text editor, but this task is very difficult and does not show us the results in a live preview.
Instead we can use the SCSM Authoring Tool to edit the form in a graphical user interface so we can make the changes to the form in a WYSIWYG editor to know exactly what we are going to get from our end product.
There are two (2) forms that we need to edit. The first is the User CI form, to allow analysts to select a user as a VIP, and the second is the Incident form, to display whether the user is a VIP or not. (We could then also edit any form that the Affected User field appears on, but to keep this blog shorter we will only focus on the Incident form.)
First we need to select the form we wish to add to, in our case the User CI form.
From the class browser pane we need to first select the Form Browser tab at the bottom of the pane. This then shows us all the forms we have to choose from.
With such a huge list it is easier to search for the form we want, so select the search text box and type “User” to look for any forms that are related to the user.
As you can see there is only one form that relates to the user.
Right click the User Form and select View from the drop down menu. This opens the form from the sealed management pack for us to view only and the Management Pack Explorer should look like this:
In the main editing pane we should also see the User CI form. This will be greyed out as it is just showing us what the form looks like to make sure we have the right one.
To edit this form click the Customize button.
Again, as with the user class, the customisations cannot be saved in the sealed management pack that the form is currently in, so we have to select the unsealed management pack we created earlier.
Our management pack will now have a customised User Form in the list and the form browser is no longer greyed out so we can now make edits to it.
Many SCSM forms work on the idea of panels that divide the form into general areas to keep controls of like types together. One of the big problems people have when editing forms is not being able to get to the panels, as they are covered by controls that are set to stretch to fill the panel.
A way to get around this is to set a margin on one of the controls (Such as a label) so we can select the panel behind the controls.
In this case, if we select the “Display As” label we can set the Top margin to a higher value.
To set the margin, select the Details pane and manually enter a value for the TOP property in the Margin heading. Like this:
The form will then adjust to the new value and look like this:
To make it all look even we can then select the “User Name” label and set the same value for its top margin.
Now that we have made some room for our new VIP control we can add the check box we will use to display the VIP value.
Within the Authoring Tool window there is a Form Customization Toolbox pane that contains all the types of controls we can add to a form. For our purposes we want a simple check box that shows the analyst whether the user is a VIP or not.
From the Toolbox Pane, click and drag the Check Box control on to the form just above the Display As label.
The area around the Display As and the User Name boxes will be highlighted when you are over the top of the Panel that these controls use. Release the left mouse button when the form looks like this:
As you can see it will drop the new check box in the stack panel but may not be exactly where we want it.
To adjust the location of the control it is best to let the form rendering be as automatic as possible, so we set as many properties to auto as we can.
Within the properties pane we can set the Top, Bottom, Left and Right margin settings all to 0. This will allow the stack panel to manipulate the control in line with all the other controls.
The other settings that will be set to default are the Height, Width, Maximum Height, Maximum Width, Vertical Alignment and Horizontal Alignment settings. You will get a feel for how these settings work in the form designer over time, but for now the settings used on this form are:
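As a rough idea of what those choices mean underneath, SCSM forms are WPF, so the effect of zeroed margins and automatic sizing can be sketched in XAML (a conceptual illustration only; the Authoring Tool generates the real markup for you):

```xml
<!-- Conceptual sketch only: the Authoring Tool writes the actual form XML.
     Zero margins plus Auto sizing let the parent panel place the control. -->
<CheckBox Margin="0,0,0,0"
          Height="Auto"
          Width="Auto" />
```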
Those settings give us a check box control that looks like this:
Within the details pane we can also set the label of the control to a value that will display to the analyst on the form. In this case VIP.
We now have to link the control to a property in the Domain User or Group class. This will be the VIP property that we created earlier.
In the details pane select Binding Path and then select VIP from the list of class properties.
The user form is now complete.
It should look like this:
The Incident Form
The second form that we need to edit is the Incident form. We want to add a check box to display if the user is a VIP or not.
Like our previous edit we need to select the form we wish to add to, in this case the Incident form.
From the class browser pane we need to first select the Form Browser tab at the bottom of the pane. This then shows us all the forms we have to choose from.
If we type “Incident” in to the search field it reduces the number of items to a manageable level.
Right click on System.WorkItem.Incident.ConsoleForm and select View.
Click Customize to make edits to this default form.
Again, any edits to a form must be stored in a management pack that we can write to (Unsealed) so we are asked what management pack we want to save it to.
Select the VIP management pack that we are editing and click OK.
On the Incident form the controls are laid out using what are called Stack Panels.
Many SCSM forms work on the idea of Stack Panels that pretty much do as their name implies and “Stack” controls on top of each other.
In short the Stack Panel is a simple layout panel that automatically stacks elements within it below or beside each other, depending on its orientation. This makes creating any type of list or stacked controls very easy. All controls like Combo Box, List Box, Label, Text Box etc. can use a Stack Panel to organise their layout.
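For readers familiar with WPF, the behaviour described above can be sketched in XAML (a simplified illustration only; the real Incident form markup is far more complex):

```xml
<!-- Controls inside a vertical StackPanel are laid out top to bottom;
     dropping a new control between two others re-stacks them automatically. -->
<StackPanel Orientation="Vertical">
  <Label Content="Affected User" />
  <ComboBox />
  <CheckBox Content="VIP" />
</StackPanel>
```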
Like the panels on the previous form, one of the big problems people have when editing forms is not being able to get to the panels, as they are covered by controls that are set to stretch to fill the panel.
A way to get around this is to set a margin on one of the controls (Such as a label) so we can select the panel behind the controls.
In this case, if we select the “Affected User” label we can set the Top margin to a higher value; I set it to 25.
As you can see we can then click on the white space just above the label and that allows us to select the Stack Panel so we can drag and drop new controls in to it.
You can confirm you have selected the Stack Panel by looking at the top of the details pane. This is useful, as there are often many layers of panels or grouping boxes involved in form design.
Now that the Stack Panel is exposed, click and drag a check box control to the Stack Panel.
You will then be able to click and drag the check box control within the Stack Panel and the Affected user label and user picker controls will automatically move up or down to make room for it. This is why Stack Panels are so useful.
Now we need to assign details to the check box. In the details pane set:
- IsEnabled: False. We want the check box to show the VIP value but not for analysts to uncheck it when creating an Incident. (If you want analysts to be able to make that choice when they are creating an Incident, leave IsEnabled set to True.)
- Binding Path: Affected User.VIP. This is the property we created earlier.
Unfortunately, you cannot browse for this value as it will not appear in the Authoring Tool, so the value has to be typed in manually.
(You can get more information on this issue here: https://blogs.technet.microsoft.com/servicemanager/2012/05/25/how-to-display-user-class-extended-properties-on-incident-form/)
The extension is now finished and can be saved and the Authoring Tool closed.
Before we seal the management pack for use in the SCSM environment, there is one more manual step that must be taken to make this solution work.
Again, an issue with the Authoring Tool is that it shows and records the Binding Path as the friendly name of the class extension, “Affected User.VIP”, but the XML must contain the internal name. In this case the only difference is the space between Affected and User.
To fix this, open the saved XML management pack in your favorite editor (Notepad++ is mine) and search for “Affected User”. Anywhere you find this reference, change it to “AffectedUser” (no spaces).
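If you prefer to script this step, a few lines of Python will do the same find and replace (the file name is a placeholder; substitute the path to your own management pack):

```python
def fix_binding_paths(xml_text: str) -> str:
    """Replace the friendly name the Authoring Tool writes ("Affected User")
    with the internal name the management pack XML expects ("AffectedUser")."""
    return xml_text.replace("Affected User", "AffectedUser")

# "VIP.xml" is a placeholder file name for your exported management pack.
# with open("VIP.xml", encoding="utf-8") as f:
#     xml = f.read()
# with open("VIP.xml", "w", encoding="utf-8") as f:
#     f.write(fix_binding_paths(xml))
```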
Save the XML and move on to sealing the Management Pack.
Sealing The Management Pack
As described by TechNet:
There are two types of management packs:
- Sealed management packs: A sealed management pack (.mp file) cannot be modified.
- Unsealed management packs: An unsealed management pack (.xml file) can be modified.
Other than lists and forms, objects such as views that are defined in a sealed management pack cannot be customized. Customizing a list that is defined in a sealed management pack includes adding list items. Customizing a form that is defined in a sealed management pack includes adding fields.
You cannot unseal a management pack that is sealed. To modify objects that are stored in a management pack that you have already sealed, you can modify the original unsealed management pack file from which the sealed management pack was created. Alternatively, you can import the sealed management pack, and export it to a new unsealed management pack, that can be modified. After you import a sealed management pack, you cannot import the unsealed version of the same management pack until you delete the sealed version.
There are two (2) ways to seal a Management Pack.
- Using the Fast Seal (There is a great blog on this by Rob Ford of SCSMNZ.Net)
- Using the Authoring Toolkit (As described on TechNet)
For this blog I am going to use the Authoring toolkit.
To seal the Management Pack, click the File menu and select Seal Management Pack.
Enter the output directory where the MP will be saved.
Enter the Key File that you will use to sign the MP.
(Check out this MSDN article on how to create a Key Pair if you don’t already have one.)
Enter a Company Name.
Now you’ve got an MP, the only thing left to do is import it!
After restarting your SCSM console session you should now see the new VIP check box on the Incident form.
And if you open a User CI you should also see the VIP check box.
To test, select a user as a VIP and create a new Incident for them. The VIP check box should appear checked on the Incident form.
What you do with this knowledge is up to you.
Within a Service Request a Query Control can be used to show a list of AD Users or groups for the end user to choose from.
To limit this list to just AD users that are currently managers (have direct reports), use the User (advanced) class and set the Criteria for the “Manages User” property to Object Status Is Not Null.
To limit this list to just AD users that are currently within a certain OU, use the Active Directory User class and set the Criteria for the “Organizational Unit” property to Contains “<OU name, or part thereof>”.
The Exchange connector is a fantastic connector that allows Service Manager to connect with Microsoft Exchange and send and receive e-mail notifications and updates. The connector by itself is fairly simple but there are many services that it relies upon to function correctly.
If the Exchange connector does not seem to be working correctly follow these steps to troubleshoot the issue:
1) Logs. All errors thrown by the Exchange connector are routed to the OpsMgr log in the event log of the Service Manager server. These messages will show only errors, without any level of detail, but this is the first port of call to find out if errors are occurring.
For example, you may get an error like “Exchange Connector: Error while processing emails for email address ‘ServiceDesk@Domain.internal’”. This shows there has been an issue but not what caused it.
This type of error message is usually followed by an error in the “Health Service Modules” with a description like “A Windows Workflow Foundation workflow failed during execution.” This is because the Exchange connector workflow failed.
2) Verbose Logging. By adding the following registry keys it is possible to get more information reported to the event log:
Key: “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center Service Manager Exchange Connector”
Value: EnableEWSTracing, Type: String, Data: 1 (1 = on, 0 = off)
Value: LoggingLevel, Type: String, Data: 7 (verbose levels of 1 to 7, where 7 is the most verbose)
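If you prefer to apply the two values from a .reg file, the equivalent (using the key and value names above) looks like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center Service Manager Exchange Connector]
"EnableEWSTracing"="1"
"LoggingLevel"="7"
```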
This will increase the amount of information returned from the Exchange connector and will allow for a more in-depth discovery of the issue.
3) Dependent Services. There are several services that must be in place and functioning before the Exchange connector will work properly. These are:
Workflow Account:
Exchange Autodiscover: The Autodiscover service works by sending a request to the Exchange server with the username and password of the account being opened or configured, and Exchange replies with its connection settings. The Autodiscover service can be verified by right-clicking the Outlook icon in the system tray and selecting Test E-mail AutoConfiguration.
To verify the AutoConfiguration URL you can open the Exchange PowerShell command and enter:
Get-ClientAccessServer | Select *auto*
Look for the AutoDiscoverServiceInternalURI value. This is the autoconfiguration URL and it can be tested from a web browser. By default Exchange sets this to HTTPS; however, if the certificate is not trusted this can cause issues.
Exchange Web Services (EWS): With the AutoConfiguration service working, the connector then sends a request to Exchange Web Services (EWS) to retrieve the mailbox information. To test if EWS is running, browse to http(s)://<Exchange server FQDN>/ews/exchange.asmx. You should be prompted for user credentials and the resulting page should be blank, without any errors or warnings.
If Service Manager cannot find the EWS URL, it may need to be entered manually in the registry for Service Manager.
Key: “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center Service Manager Exchange Connector”
Value: ExchangeURL, Type: String, Data: http(s)://<Exchange server FQDN>/ews/exchange.asmx
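The same value can be applied from a .reg file. The URL below is a placeholder, so substitute your own Exchange server FQDN:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center Service Manager Exchange Connector]
"ExchangeURL"="https://exchange01.domain.internal/ews/exchange.asmx"
```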