
Cireson Software Asset Management – Tracking Operating Systems

The question of tracking Operating Systems within the Cireson Asset Management solution came up the other day, so I thought I’d put together a quick blog post to cover why we would do this and, more importantly, how.

Why Track OS Versions in Asset Management?

First off, I think it is important to ask yourself why you would want to track Operating Systems within your organisation, as it might not give you any metrics or data that are actually useful.

For example: if your organisation has an Enterprise Agreement with Microsoft that covers Windows for all of your PCs, then why do we need to report on it? If we know for sure that we are covered regardless of which version of the OS is used, then there are no useful reports to be gained about OS licensing.

However, we could get some reports about how our upgrades are progressing, or, if a particular threat is seen for a specific OS, we could quickly report on what our exposure would be.

So the first thing that you really need to do is determine whether it is worth tracking Operating Systems before investing time and effort into setting this up.

How to Track OS Versions in Asset Management

If we have decided to track OS versions, then we need to make sure we cover all the OSes that we want to track by creating a Software Asset for each of the branches we care about.

For example: if you want to track just major versions (Windows 7, 8, 10), then you can create a Software Asset for each of these without needing to go to any lower level.

However, if you are trying to ensure workstations are up-to-date, then you will have to create a Software Asset for each SKU of the Windows OS (e.g. Windows 10 Home, Windows 10 Enterprise).

Once all individual OSes are tracked, I would also suggest creating two Software Assets called “All Windows Desktop OSes” and “All Windows Server OSes”. These will have bundle rules for all of the OSes so you can track licensing if you have a limited number of OS licenses.

Below is a list of OSes that could be tracked, but it is up to you which ones to use.

Server OSes

Microsoft Windows Server 2003 Enterprise Edition R2
Microsoft Windows Server 2003 Standard Edition
Microsoft Windows Server 2003 Standard Edition R2
Microsoft Windows Server 2003 Web Edition
Microsoft Windows Server 2008 Enterprise
Microsoft Windows Server 2008 R2 Enterprise
Microsoft Windows Server 2008 R2 Standard
Microsoft Windows Server 2008 Standard
Microsoft Windows Server 2012 Datacenter
Microsoft Windows Server 2012 R2 Datacenter
Microsoft Windows Server 2012 R2 Standard
Microsoft Windows Server 2012 Standard
Windows Server 2016 Datacenter
Windows Server 2016 Standard

Desktop OSes

Microsoft Windows 10 Enterprise
Microsoft Windows 10 Pro
Microsoft Windows 7 Enterprise
Microsoft Windows 7 Professional
Microsoft Windows 7 Ultimate
Windows 7 Enterprise
Windows 7 Professional
Windows 7 Ultimate
Microsoft Windows 8 Enterprise
Microsoft Windows 8 Professional
Microsoft Windows 8.1 Enterprise
Microsoft Windows 8.1 Professional
Microsoft Windows Vista
Windows XP Professional

How to Enter OS Versions in Asset Management

Now all you have to do is enter these into Cireson Asset Management and we are done, right?

Not so fast.

We have a few options to play with here, including an option labelled “This is an OS”. It seems fairly obvious that we would select this, right?

Not so much.

This option looks in a separate location of the ConfigMgr data instead of the Add or Remove Programs list. But the Windows OS is also recorded in the Add or Remove Programs list, often with more detail, so it is better not to use this option.

Entering Software Assets one at a time can be a challenge and take a lot of time. To make it easier, here is an Excel file filled with all the information you need to make this happen by importing via Cireson Asset Import or Cireson Asset Excel.

ciresonosassets
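If you would rather build the import file yourself, a small script can generate the CSV from the lists above. Below is a minimal sketch in Python; the column headers and file name are assumptions for illustration, so match them to the Software Asset properties you actually map in your connector.

```python
import csv

# Assumed column headers for illustration only -- use the headers that
# match the Software Asset properties mapped in your import connector.
HEADERS = ["DisplayName", "Manufacturer", "Version"]

# A few entries from the lists above; extend as required.
operating_systems = [
    ("Microsoft Windows 10 Enterprise", "Microsoft", "10"),
    ("Microsoft Windows 10 Pro", "Microsoft", "10"),
    ("Microsoft Windows Server 2016 Standard", "Microsoft", "2016"),
    ("Microsoft Windows Server 2012 R2 Datacenter", "Microsoft", "2012 R2"),
]

with open("os_software_assets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(HEADERS)  # the first line must be the header row
    writer.writerows(operating_systems)
```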

Happy reporting.


How to use the Cireson Asset Import Connector

A little while ago on the Cireson Community Forum, a member asked for more details on how the Cireson Asset Import Connector works, so I decided to write a blog post to clear up exactly what the connector is and how it works. I also recorded a short video for those of you who do not like long-winded blog posts. You can find the video here.

The Cireson Asset Import Connector is one of the solutions contained within the Cireson Asset Management Stream of products and allows for Asset Administrators to take the guesswork out of importing external data into System Center Service Manager. This app allows any out-of-the-box CMDB data, or any information in the Cireson Asset Management app, to be imported from external CSV, SQL, ODBC or LDAP sources of truth, exposing an intuitive interface that provides the ability to map columns and schedule imports when required.

A little-known pub quiz fact is that the Cireson Asset Import app grew from the CSV Import app, which was the very first Cireson app to hit the market. Next time this question comes up in a pub quiz, rest easy knowing that you now have the answer and are in a pub that is so cool it asks questions like that one! 🙂

When you add the Cireson Asset Import app to a Service Manager environment, importing data becomes seamless. One-time imports and configuring XML files become a thing of the past. The straightforward app provides the organization with the ability to build an asset repository of information that is relevant and accurate when working with requests in Service Manager.

So let’s get into it… Throughout the following post, I will call out important things to note and also what is generally regarded as “Best Practice”, but always consider the requirements and impact these settings may have.

1. Creating a new Asset Import Connector

  1. Within the SCSM console, select the Administration workspace.
  2. Right-click the Connectors node.
  3. Select Create Connector from the drop-down menu.
  4. Select Asset Management Import Connector from the sub-menu.
NOTE:

The sub-menu option Asset Management Import Connector (Import) is for creating connectors from pre-created or backed-up Import Connector files.

Enter a name for the connector that will make sense to other administrators for future maintenance tasks.

Select a Management Pack (or create a new one) that will be used to contain the workflow information required by the connector.

Cireson Best Practice:

Best practice for the creation of Management Packs is to create them via the SCSM Authoring Tool, giving each an internal name and display name in the format “ – Asset Management Import Connectors”.

This helps identify the Management Pack when it is exported or backed up at a later date.

The next step will differ depending on the input data source. Select and complete one of the following sections before continuing.

2. Using a CSV Source

After completing the steps in the section above, browse to the location of the .CSV file that contains the asset data to import and select the Encoding Format of the file.

The selected path can be either a local path (on the SCSM workflow server) or a network share to which the Workflow account has read permissions.

The first line of the CSV file must contain the header row information for the data contained within.
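Before pointing the connector at a file, it can be worth a quick sanity check that the header row matches what you plan to map. A minimal Python sketch (the path and headers shown are examples only):

```python
# Print the header row of the import file so it can be compared against
# the class properties to be mapped. The path and headers are examples.
csv_path = r"\\fileserver\AssetImports\hardware_assets.csv"

with open(csv_path, encoding="utf-8") as f:
    headers = f.readline().strip().split(",")

print(headers)  # e.g. ['SerialNumber', 'AssetTag', 'Model']
```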

Cireson Best Practice:

It is Cireson best practice to create a single folder that contains all the CSV import files for the connectors in use. It is also best to configure connectors with a UNC path as the location of the selected file, as this allows the connector to be edited successfully from other computers.

Continue with the connector settings.

 3. Using a SQL Source

For a Microsoft SQL Server data source:

Enter the SQL connection string by clicking the ellipsis button and entering the required connection information.

NOTE:

If Windows Authentication is to be used, the SCSM Workflow account must have read access to the source database.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.
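If you want to test the query outside the console first, a few lines of Python with the pyodbc library will do it. This is a sketch only: the driver, server, database and query are placeholders, and remember the script runs as your account whereas the connector will run as the SCSM Workflow account.

```python
import pyodbc

# Placeholder connection details -- substitute your own server, database
# and installed ODBC driver.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=PurchasingDB;Trusted_Connection=yes;"
)

# The same query that will be pasted into the connector.
query = "SELECT SerialNumber, AssetTag, Model FROM dbo.Assets"

cursor = conn.cursor()
cursor.execute(query)
rows = cursor.fetchall()

print(f"{len(rows)} rows returned")            # mirrors the SQL Query Results field
print([col[0] for col in cursor.description])  # field names available for mapping
```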

Continue with the connector settings.

4. Using an ODBC Source

For an ODBC data source:

Create a File Data Source Name (DSN) that contains the Server, Database and username for the data source.

Browse the file system and select the File DSN.

NOTE:

The SCSM Workflow account must have read access to the File DSN.

Enter the File DSN Password for the username within the File DSN.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.
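The File DSN can be tested the same way before the connector uses it. Another minimal pyodbc sketch, with the DSN path, password and query as placeholders:

```python
import pyodbc

# FILEDSN hands the connection details to the ODBC driver manager, much
# as the connector does. The path, password and query are placeholders.
conn = pyodbc.connect(r"FILEDSN=C:\DSNs\assets.dsn;PWD=example-password")

cursor = conn.cursor()
cursor.execute("SELECT SerialNumber, AssetTag FROM Assets")
print(f"{len(cursor.fetchall())} rows returned")
```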

Continue with the connector settings.

 5. Using an LDAP Source

For an LDAP data source:

Enter the LDAP Server or Namespace and the LDAP Port (If required).

If the SCSM Workflow account does not have read access to the LDAP source, enter alternative credentials with the required rights.

Enter the LDAP Attributes that are required to be returned, separated by commas.

Enter an LDAP search starting path to reduce the search scope as required.

Enter any LDAP Filter needed to refine the results to the specific required data.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The LDAP Query Result field will show the number of rows returned if the query was successful.
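To validate the filter and attribute list before creating the connector, a quick test with Python’s ldap3 library can help. Everything here (server, credentials, search base, filter, attributes) is a placeholder to be swapped for the values you intend to enter in the connector.

```python
from ldap3 import Server, Connection, SUBTREE

# Placeholder server and credentials for illustration only.
server = Server("dc01.contoso.com", port=389)
conn = Connection(server, user="CONTOSO\\svc-scsm",
                  password="example-password", auto_bind=True)

conn.search(
    search_base="OU=Workstations,DC=contoso,DC=com",        # LDAP search starting path
    search_filter="(objectClass=computer)",                 # LDAP filter
    search_scope=SUBTREE,
    attributes=["name", "operatingSystem", "description"],  # attributes to return
)

print(f"{len(conn.entries)} entries returned")  # mirrors the LDAP Query Result field
```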

Continue with the connector settings.

6. Connector Settings

Select the target class that the records will be imported into. This might be one of the base classes (such as Hardware Asset) or, if other relationships are required, a Combination Class (Type Projection) that contains the relationships needed for the import.

Enter a Workflow log path to track import results and report on success/failure.

Set the required options for the instance of the Asset Import connector. See below for more details on these options.

Once all options are selected, click Next.

Asset Import Connector Options:

  • Test Mode – The connector will run and create a log file for inspection without committing any changes to the SCSM database.
  • This connector can create new items – When enabled, this option will allow the connector to create new records within the database. This is used to allow the import of new records.
  • This connector can update existing items – When enabled, this option will allow the connector to update existing records that match the key fields of the selected class.
  • This connector will DELETE ALL matching items only – This option changes the behaviour from creating records to deleting them. Any record matched from the import data to an instance of the class will be removed from the SCSM database. WARNING! If data is deleted it cannot be recovered.
  • This connector will update multiple existing items matching specific custom keys – This option enables the Custom Keys mapping described in the next section.
  • Do not replace \n with a linefeed – By default, the import connector will interpret any \n text as representing a new line and will therefore replace it with a linefeed character within SQL. Select this option to keep \n as literal text.

7. Mapping Fields

Data Mappings allow the mapping of the specified input data to the properties of the selected target class within SCSM.

On the Data Mapping screen, if the option “This connector will update multiple existing items matching specific custom keys” was selected on the previous screen, the first option shown is for Custom Keys. Custom Keys are used to find all existing matching items and update them as normal via the mappings below. At least one Custom Key is required.

The Custom Key can be any of the properties for the class that was selected for this connector.

Add the custom keys as required and map these to the data from the import source.

NOTE:

All Key Properties for the selected class as well as any Custom Keys are required fields and must be mapped to continue.

The property displayed in the left column will show all properties of the selected class, along with any extended properties that have been added for the class.

The Data Type in the middle column will show what input data type the property will expect. String (Key) identifies the primary key for the selected class.

The Mapped To value displayed in the right column will show drop-down values for each available column header from the specified source.

The Hardware Asset ID should be mapped to the primary key selection you chose in the Asset Management Settings. (Serial Number, Asset Tag, GUID, etc.)

Map all additional properties to the input data defined by the input source.

Any properties that are mapped will be updated or entered as defined.

Any properties that are not mapped will not be updated.

If a Combination Class is selected for the connector there will be additional mapping fields under the Relationship heading.

These can be used to map data from multiple classes together as relationships as required.

Once all mappings are complete, click Next.

8. Connector Workflow Schedule

Some connectors will be run as a one-off to import bulk data into the SCSM database, whereas others might be run on a schedule to keep data from other sources up-to-date within the database.

An example of a scheduled data source might be a connector into a Mobile Device Management (MDM) solution, or an accounting or purchasing system (for Invoices and Purchase Orders).

For connectors that will only be run once, select the option marked This connector will be run manually.

When using this option, a warning message will be displayed to remind administrators that the connector will only run when using the Synchronize Now task within the console.

For a recurring schedule, enter the frequency as either daily or as a regular recurrence with a set frequency.

Ensure the Connector Enabled option is enabled to allow the connector to run. This option can help with the administration of the connector at a later date if it needs to be turned off for a period of time for maintenance or fault finding.

When the scheduling information has been entered, click Create.

9. Manually Running a Connector

Once a connector has been created it will show within the Connectors node in the Administration workspace of the SCSM console. Within this node, administrators are able to see the current status of all connectors, when they were last started and finished and their percentage complete.

Administrators are also able to manually run a connector to either force the synchronization regardless of workflow schedule or to trigger a non-repeating connector.

To manually run a connector:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the Connector to be run and click the Synchronize Now task within the tasks pane.
If the connector does not have a schedule set (i.e. is disabled), a message will appear informing you that the connector is disabled and asking if it should still be run.

Click Yes to run the Synchronization.

The connector workflow will then be scheduled to start at the next opportunity for the workflow engine.

10. Exporting and Importing a Connector

Once a connector has been configured, the settings can be exported to allow administrators to copy the connector to a different environment (e.g. dev to prod).

To export and import a connector:

Within the environment to export from:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the Connector to be exported and click the Export task within the tasks pane.

Choose a path for the connector XML file and click Save.

Within the environment to import in to:

On the Connectors node, select Create Connector from the drop-down menu.

Select Asset Management Import Connector (Import) from the sub-menu.

Browse to the folder containing the exported XML file, select the XML file to import, and click OK.

A window will appear that allows the Connector to be renamed from its original name, if required, and the Management Pack that holds the information to be changed.

If the connector is importing from a CSV file, an additional field will appear that is used to provide the source location of the CSV file required.

Enter the values needed and click OK.

The connector will be imported and will now appear in the connectors node.

11. Deleting a Connector

If a connector is no longer needed, then it can be removed from the SCSM environment by deleting the connector from the console.

To delete a connector:


Within the SCSM console, select the Administration workspace.

Select the Connectors node.

Select the Connector to be deleted, then click the Delete task from the tasks pane on the right of the screen.

Click OK on the message that appears to confirm the connector to be deleted.

If the connector has previously imported data, a second message will appear asking if the data that was imported by the connector should also be deleted.


Hope this gives you a clear idea of how this app comes together and works for your organization.

Leave a comment if you have any additional questions.

 

An ITIL Change Management Checklist (Best Practices to Avoid Common Pitfalls)

In the last week I’ve been doing a couple of presentations on Change Management and where businesses should start. This post will be talking about the IT Service Management life-cycle and, most importantly, delivering services to our end users, or customers, that are successful, have little to no negative impact on business continuity during deployment, and reduce business risk wherever possible.

This post will focus on Change Management: where to start with it, what the best practices are, and how we can make it easier on ourselves.

To kick off, I think it is important that we have a clear idea of what a change is and why change management is important.

“A change is defined by ITIL as the Addition, Modification or Removal of anything that could have an effect on IT services.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Now I would make one slight modification to this statement and replace IT Services with Business Services.

Why should we restrict the amazing work we are doing to just IT?

ITIL also tells us that “Change Management provides value by promptly evaluating and delivering the changes required by the business, and by minimizing the disruption and rework caused by failed changes.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Some of the key words that we should all keep a focus on are “Promptly” evaluating, “Minimizing” disruption to end users and also “Minimizing Rework” that is required when a change fails.

There are some common challenges that people have when looking at Change Management for the first time, or looking at improving their Change Management processes. Here are four of the main challenges that I see when discussing Change Management with clients:

Stuck with someone else’s mess

Many people fail before they even start because they are buried in a mess created before they arrived, either because of a failed attempt to implement change management or just a complicated system that has always existed.

And as we know many systems are just maintained because “That’s the way it’s always been done”.

Getting buy in from the entire business is important. Having the business behind a push for better change management will enable you to wipe the slate clean and build something less complex and more targeted to the business.

Not sure where to start

Change management can be a big beast and can get people all bent out of shape knowing they will have to follow some sort of formalized process.

However, as we will see, there is no need for Change management to be as complex as people think it will be.

It’s Too Complex

Yes, this would have to be my personal number one bugbear with some change management processes.

But, as we mentioned earlier, ITIL tells us that change management should “provide value by promptly evaluating and delivering the changes…”

So if a change management process is taking too long or is an arduous process, then we know we have it wrong.

Too many fingers in the pie

The heading is an oversimplification of this point.

What I’m trying to explain here is that many groups or sub-groups that we work with have set procedures of their own, at varying levels of maturity, and quite often these independent groups think they have it right and want to take care of it all themselves.

However, these processes are often independent of each other and can get in each other’s way or “rework” the same information or data several times.

Imagine, if you will, that every car manufacturer had their own set of road rules. Individually these rules may work and may be a perfect procedure for that car manufacturer. That’s all well and good until we all start sharing the same road.

Then, we have chaos.

Even though everyone is following a set and tested procedure, if we don’t take into consideration all the other systems that we have within our business, then we see conflicting issues and changes that were doomed to fail.

Specifically in IT, as our systems become ever more complex, these issues occur on an all too frequent basis.

I’m sure everyone has an example of where a minor change to one system had a catastrophic outcome for some unrelated system that no one knew about or had not considered.

Good change management can reduce the amount of time spent on unplanned work but it has to be effective.

Bad change management will just add an administration layer to the firefighting we always do.

This both wastes time and does nothing to reduce the amount of unplanned work we have.

From what we have talked about so far there are some basic rules we can stick to that will help guide us to a good Change Management process.

Promptly is the key

If a process takes too long then no one is going to want to follow it. High-risk issues are always going to take longer, but there is no need to drag our feet where we don’t need to.

Low-risk changes should be processed speedily and maybe even approved automatically.

Which leads us to our next point:

Fit for Purpose

There is no need to bother your CAB with basic routine changes. If the CAB can clearly define what they require for a basic low-risk change, then make sure your process hits that and move on.

The CAB has bigger fish to fry and more risk to deal with.

So why not have a simple process for low-risk changes? One Change Manager reviews, then the change is done. SIMPLE!

How do we make sure that we capture these key points?

Standardization

Create templates where possible. Inject data we already know so people don’t have to guess at values or (as I am sure we have all seen) simply make values up.

It is more important to be able to get a consistent and complete result than it is to get the perfect result. Consistency allows us to report and see where we are doing well, and where we can improve.

More processes to make all this happen is NOT the solution. Often less is more when it comes to these processes.

We can all think of a change that we SHOULD do but never quite get around to. How about rebooting a server? Depending on the server, this could be low risk, minimal impact, not worth going to CAB over… but should it be a change?

Remember a change is defined as “…the Addition, Modification or Removal of anything that could have an effect on IT services.”

Well why not have a change process that accepts that a low risk server can be rebooted without CAB approval, just so long as it is recorded?

Why not automate it?!
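To make the idea concrete, here is a minimal sketch of a pre-approved, recorded reboot. It is illustrative only: in a real implementation the record would be a Change Request in SCSM rather than a row in a CSV file, and the server name and log path are invented for the example.

```python
import csv
import getpass
import subprocess
from datetime import datetime, timezone

# Invented placeholders -- a real implementation would create a Change
# Request in SCSM instead of appending to a CSV log.
CHANGE_LOG = r"\\fileserver\ChangeLogs\low_risk_changes.csv"
SERVER = "appserver01"  # a server already classified as low risk

# Record the change first...
with open(CHANGE_LOG, "a", newline="", encoding="utf-8") as f:
    csv.writer(f).writerow([
        datetime.now(timezone.utc).isoformat(),
        getpass.getuser(),
        SERVER,
        "Reboot",
        "Low risk - pre-approved standard change",
    ])

# ...then perform it: restart the remote server with a 60-second warning.
subprocess.run(["shutdown", "/r", "/t", "60", "/m", rf"\\{SERVER}"], check=True)
```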

Of course none of this is any good if we don’t know the risk.

More specifically, Correct Risk.

So what is the best way to assign risk to our IT Services?

This is a big topic that usually takes many committees and sub committees, revisions, arguments, another committee and more arguments.

There is a much simpler way to assign risk to an IT service and you most probably have already done it. D.R.

Most organisations have already classified their systems for Disaster Recovery. We have had the argument and we have spent the money to make sure those “Business Critical” systems have good quality D.R.

If you are like most organizations I’ve worked for, you will have gone through the process of “What do we cover with DR?”

And we start by including EVERYTHING.

We then get the quote back from a vendor on what that DR solution would cost and we quickly write off half a dozen services without even thinking.

And again and again we go until we have a DR solution that covers our Business Critical systems.

Guess what? They are High risk.

Some systems we have internal HA solutions for. Maybe SCSM with 2 Management servers.

Not Critical….  We could live off paper and phone calls for a few hours or even days without it….   Let’s say medium risk.

Then we have everything else. Low risk.

Simple. Why over complicate it?

So all the theory in the world is all well and good but what are some real world examples? Well I’m glad you asked.

I wanted to show you a range of items that had a common theme but ranged in complexity and risk. That way I can demonstrate to you the way it can be done in the real world with a real scenario.

What better scenario than our own products.

However, there is no reason that these examples could not be easily translated to ANY system that you may have in your organisation.

Our first example is the Cireson Essentials apps, like Advanced Send E-mail, View Builder, and Tier Watcher.

These can be classified as “Low Risk” because only IT Analysts use them, and IT Analysts can do their job without them; it would take more effort, but they could still work.

Second is the Self Service Portal. This affects more than just the IT Analysts: the impact is on end users, who would be unable to view and update their Work Items or use Self Service. But all of this can still be done via phone or e-mail, although it will take longer and not be as simple for end users.

Finally, an upgrade of Asset Management is a high-risk change to our system. Asset Management affects a wide range of systems and impacts SRs as well as support details that analysts use.

In addition, the upgrade of Asset Management is not a simple task. During the upgrade there are reports, cubes, management packs, DLLs and PC client installs that are required.

So let’s take a look at what this looks like in the real world.

So when creating a change management process surely there are some simple steps we can follow to get the ball rolling.

Here is what I like to tell people are the 4 key pieces of a successful change management practice.

Less Process

Keep the process as simple and as prompt as possible. Start by creating basic process templates for Low, Medium and High risk changes. There are always going to be exceptions that we have to add a little more process for, but wherever possible stick to the standard three basic templates.

TEST!

The number one reason for failure in the changes I’ve been involved in is lack of testing. There is nothing like the old “Worked fine in my lab…” line.

The amount of rework or “unplanned” work that stems from an untested change is huge, and even just a little testing can catch big issues.

Get the right people involved

We are not always experts in what a system is, what it does or how it should work.

How many times has your testing for an application package been to simply install it, and if it installs without an error, it must be good?

What if when an end user logs on the whole thing crashes?

So even getting end users involved in your testing of minor changes can be a huge benefit.

And finally….

Review

So many places I see never have a formal review process.

These are critical for making sure the processes don’t stray from a standardized approach and that we know that all the obvious things are right and the whole thing is not going to blow up when we hit go.

Just reviewing the failures to find what went wrong is not enough.

It is also important that the changes that went right are fed back into future decisions to avoid the ones that go wrong. This feedback should find its way back into change templates AND base documentation for the systems, so we keep all our processes up-to-date.

These don’t have to be long but these reviews can identify ways to make the process simpler, where a template can be created or where the CAB can be cut out of the picture entirely!

One fantastic question I had recently was “How many changes should we template?”
This is a great question, as many people think that templating everything they do is a waste of time. This is not the case.
If you have a change that you perform on a recurring basis (not a one-off), even if it only occurs once every six months or a year (in fact, I’d argue especially if it is only once or twice a year), it is worth templating for two main reasons:

  • Does anyone remember the correct process for the change?
    Often a single person is responsible for a change and all the knowledge can be locked up in their head. By templating processes this way, we can ensure that the knowledge is available to everyone so if the original person is no longer available the change can still be a success.
  • Was the process successful last time we ran it and if not, what went wrong so we don’t do it again?
    If you are only doing a change once or twice a year a template is a great way of making sure that lessons learnt are never lost and mistakes are worked out of the process.

A standard approach might include a set of standard activities that are carried across all risk levels, but we just keep adding to the process as we move up the risk profile. A basic template might look something like this:

CR Standardization

The above example is just an example. It is by no means prescriptive guidance that you should follow religiously, but more of a starting point. There are always going to be those changes that require more steps because the change is more complex. What is important is that the basics are followed and a standard approach is used to encompass the key points outlined in this article.

So to sum this all up:

  • Prompt and Simple Process. Make it quick and simple
  • Standardize ALL changes to a simple set of rules and create templates
  • Make sure your changes are fit for purpose. Only bother the CAB when you need to and have the right people involved
  • Simple risk calculation (use disaster recovery plans if you don’t know where to start)
  • TEST, TEST and RETEST!
  • Review and document your changes to improve what you do