Category: SCSM

Hidden SCSM Console Shortcuts

After working with SCSM for six years now, I thought there were pretty much no new surprises left for me in this product which, let's face it, gets new features about as often as politicians do something right.

So it was with much celebration and rejoicing that I was informed of a hidden trick within the SCSM Console that we all love to hate.

A good friend and fellow SCSM tragic Shayne Ray contacted me today to share what he found.

While doing some work jumping from the SCSM Console to the Cireson Service Manager portal, Shayne hit Ctrl+F5 to refresh the browser; however, the focus at the time was on the SCSM console, and he found something remarkable. A quick search around the interwebs finds a few mentions of it from others but nothing official from Microsoft, so I thought I'd do a quick write up of it all.

While in the console, in any location, if the analyst hits any of the following key combinations, the corresponding action is invoked:

  • Ctrl+F1 – Opens a new default Incident form
  • Ctrl+F2 – Opens a new Incident from a template
  • Ctrl+F3 – Opens a new Request Offering from a template
  • Ctrl+F4 – Opens a new Service Request from a template
  • Ctrl+F5 – Opens a new Change Request from a template
  • Ctrl+T – Hides or shows Tasks pane
  • Ctrl+F – Opens the Advanced Search window
  • Ctrl+D – Hides or Shows the Details Pane
  • Ctrl+1 – Selects the Administration Workspace
  • Ctrl+2 – Selects the Library Workspace
  • Ctrl+3 – Selects the Work Items Workspace
  • Ctrl+4 – Selects the Configuration Items Workspace
  • Ctrl+5 – Selects the Data Warehouse Workspace
  • Ctrl+6 – Selects the Reporting Workspace
  • Alt+F1 – Hides or Shows the Navigation pane

You learn something new every day! 🙂

Cireson Software Asset Management – Tracking Operating Systems

The question of tracking Operating Systems within the Cireson Asset Management solution came up the other day and I thought I’d put together a quick blog post to cover off why we would do this and more importantly how.

Why Track OS Versions in Asset Management?

First off, I think it is important to ask yourself why you would want to track Operating Systems within your organisation, as it might not give you any metrics or data that are actually useful.

For example: If your organisation has an Enterprise Agreement with Microsoft that covers Windows for all of your PC's, then why do we need to report on it? If we know for sure that we are covered regardless of what version of the OS is used, then there are no useful licensing reports to be gained about OS's.

However, we could get some reports about how our upgrades are going, or if a particular threat targets a specific OS, we could quickly report on what our exposure would be.

So the first thing you really need to do is determine whether it is worth tracking Operating Systems before investing time and effort into setting this up.

How to Track OS Versions in Asset Management

If we have decided to track OS versions, then we need to make sure we cover all OS's we want to track by creating a Software Asset for each branch we are interested in.

For Example: If you want to track just major versions (Windows 7, 8, 10), then you can create a Software Asset for each of these without needing to go any deeper.

However, if you are trying to ensure workstations are up-to-date, then you will have to create a Software Asset for each SKU of the Windows OS (e.g. Windows 10 Home, Windows 10 Enterprise).

Once all individual OS's are tracked, I would also suggest creating two Software Assets called "All Windows Desktop OS's" and "All Windows Server OS's". These will have bundle rules for all of the OS's so you can track licensing if you have a limited number of OS licenses.

Below is a list of OS’s that could be tracked, but it would be up to the individual as to which ones to use.

Server OS’s

Microsoft Windows Server 2003 Enterprise Edition R2
Microsoft Windows Server 2003 Standard Edition
Microsoft Windows Server 2003 Standard Edition R2
Microsoft Windows Server 2003 Web Edition
Microsoft Windows Server 2008 Enterprise
Microsoft Windows Server 2008 R2 Enterprise
Microsoft Windows Server 2008 R2 Standard
Microsoft Windows Server 2008 Standard
Microsoft Windows Server 2012 Datacenter
Microsoft Windows Server 2012 R2 Datacenter
Microsoft Windows Server 2012 R2 Standard
Microsoft Windows Server 2012 Standard
Windows Server 2016 Datacenter
Windows Server 2016 Standard

Desktop OS’s

Microsoft Windows 10 Enterprise
Microsoft Windows 10 Pro
Microsoft Windows 7 Enterprise
Microsoft Windows 7 Professional
Microsoft Windows 7 Ultimate
Windows 7 Enterprise
Windows 7 Professional
Windows 7 Ultimate
Microsoft Windows 8 Enterprise
Microsoft Windows 8 Professional
Microsoft Windows 8.1 Enterprise
Microsoft Windows 8.1 Professional
Microsoft Windows Vista
Windows XP Professional

How to Enter OS Versions in Asset Management

Now all you have to do is enter these into Cireson Asset Management and we are done, right?

Not so fast.

We have a few options to play with here, including one labelled "This is an OS". Seems fairly obvious that we would select this, right?

Not so much.

This option looks in a separate location of the ConfigMgr data instead of the Add or Remove Programs list. But the Windows OS is also recorded in the Add or Remove Programs list, often with more detail, so it is better not to use this option.

Entering Software Assets one at a time can be a challenge and take a lot of time, so to make it easier, here is an Excel file with all the information you need, ready to import via Cireson Asset Import or Cireson Asset Excel.

ciresonosassets
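If you would rather build the import file yourself than download the one above, a rough PowerShell sketch like the following will generate a CSV that Cireson Asset Import (or Asset Excel) can consume. The file path and column headers are just placeholders; you map whichever columns you choose to the Software Asset properties when you configure the import.

# Build a simple CSV of Software Assets, one per OS we want to track.
# Column names and output path are placeholders only - map them during import configuration.
$osList = @(
    'Microsoft Windows Server 2012 R2 Standard'
    'Microsoft Windows Server 2016 Standard'
    'Microsoft Windows 7 Enterprise'
    'Microsoft Windows 10 Enterprise'
)

$osList | ForEach-Object {
    [PSCustomObject]@{
        SoftwareAssetName = $_
        Manufacturer      = 'Microsoft'
    }
} | Export-Csv -Path 'C:\Temp\OSAssets.csv' -NoTypeInformation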

Happy reporting.

How to use the Cireson Asset Import Connector

A little while ago on the Cireson Community Forum a member asked for more details on how the Cireson Asset Import Connector works. So I decided to write a blog post about it to clear up exactly what the connector is and how it works. I also recorded a short video for those of you who do not like long winded blog posts. You can find the video here.

The Cireson Asset Import Connector is one of the solutions contained within the Cireson Asset Management Stream of products and allows for Asset Administrators to take the guesswork out of importing external data into System Center Service Manager. This app allows any out-of-the-box CMDB data, or any information in the Cireson Asset Management app, to be imported from external CSV, SQL, ODBC or LDAP sources of truth, exposing an intuitive interface that provides the ability to map columns and schedule imports when required.

A little known pub quiz fact is that the Cireson Asset Import app grew from the CSV Import app, which was the very first Cireson app to hit the market. Next time this question comes up in a pub quiz, rest easy knowing that you now have the answer and are in a pub that is so cool it asks questions like that one! 🙂

When you add the Cireson Asset Import app to a Service Manager environment, importing data becomes seamless. One-time imports and configuring XML files become a thing of the past. The straightforward app provides the organization with the ability to build an asset repository of information that is relevant and accurate when working with requests in Service Manager.

So let's get into it… Throughout the following post, I will call out important things to note and also what is generally regarded as "Best Practice", but always consider the requirements and impact these settings may have.

1. Creating a new Asset Import Connector

  1. Within the SCSM console, select the Administration workspace.
  2. Right click the Connectors Node.
  3. Select Create Connector from the drop down menu.
  4. Select Asset Management Import Connector from the sub menu.
 ami01
 ami02 NOTE:

The sub menu option for Asset Management Import Connector (Import) is for importing previously created or backed up Import Connectors.

Enter a name for the connector that will make sense to other administrators for future maintenance tasks.

Select a Management Pack (or create a new one) that will be used to contain the workflow information required for the workflow of the connector.

 ami03
 ami04 Cireson Best Practice:

Best practice is to create these Management Packs via the SCSM Authoring Tool, giving each an internal and full name in the format " – Asset Management Import Connectors".

This helps identify the Management Pack when it is exported or backed up at a later date.

The next step will be different depending on the input data source. Select and use one of the following sections below before continuing.

2. Using a CSV Source

After completing the steps in the previous section, browse to the location of the .CSV file that contains the asset data to import and select the Encoding Format of the file.

The selected path can be either a local path (on the SCSM workflow server) or a network share that the Workflow account has read permissions to.

The first line of the CSV file must contain the header row information for the data contained within.

 ami05
 ami04 Cireson Best Practice:

It is Cireson best practice to create a single folder that contains all the CSV import files for any connectors in use. It is also best to configure connectors with a UNC path for the selected file's location, as this allows the connector to be edited successfully from other computers.
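Before the connector first runs, it can also be worth sanity checking that the workflow account can reach the share and that the header row parses the way you expect. A rough sketch (the path and file name are examples only):

# Quick check of the CSV source from the workflow server (example path and file name).
$csvPath = '\\fileserver\SCSMImports\HardwareAssets.csv'

if (Test-Path -Path $csvPath) {
    # These are the column names the connector will offer for mapping
    (Import-Csv -Path $csvPath | Select-Object -First 1).PSObject.Properties.Name
}
else {
    Write-Warning "Cannot read $csvPath - check share and NTFS permissions for the workflow account."
}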

 Continue the connector settings.

 3. Using a SQL Source

For Microsoft SQL Server data source:

Enter the SQL Connection string by clicking the ellipsis button and entering the required connection information.

 ami02 NOTE:

If Windows Authentication is to be used, the SCSM Workflow account must have read access to the source database.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.
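If the query fails or returns zero rows, testing it outside the console can save some head scratching. Here is a minimal sketch using plain ADO.NET; the server, database and query are placeholders, and it should be run under the workflow account if Windows Authentication is in play:

# Test the connector's SQL query outside the console (server, database and query are placeholders).
$connectionString = 'Server=SQLSERVER01;Database=AssetSource;Integrated Security=True'
$query            = 'SELECT SerialNumber, AssetTag, Model FROM dbo.Hardware'

$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$adapter    = New-Object System.Data.SqlClient.SqlDataAdapter $query, $connection
$table      = New-Object System.Data.DataTable
[void]$adapter.Fill($table)

"{0} rows returned, columns: {1}" -f $table.Rows.Count, ($table.Columns.ColumnName -join ', ')
$connection.Dispose()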

 ami06
Continue the connector settings.

4. Using an ODBC Source

For an ODBC data source:

Create a File Data Source Name (DSN) that contains the Server, Database and username for the data source.

Browse the file system and select the File DSN.

 ami02 NOTE:

The SCSM Workflow account must have read access to the File DSN.

Enter the File DSN Password for the username within the File DSN.

Enter the SQL query that will be used to extract the data required for this connector.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The SQL Query Results field will show the number of rows returned if the query was successful.

 ami07ami08
Continue the connector settings.

 5. Using an LDAP Source

For an LDAP data source:

Enter the LDAP Server or Namespace and the LDAP Port (If required).

If the SCSM Workflow account does not have read access to the LDAP source, enter alternative credentials with the required rights.

Enter the LDAP Attributes that are required to be returned separated by commas.

Enter an LDAP search starting path to reduce the search scope as required.

Enter any LDAP Filter needed to refine the results to the specific required data.

Click Execute Query to test the query and gather field name requirements for class property mapping.

The LDAP Query Result field will show the number of rows returned if the query was successful.
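The same kind of trial run can be done for an LDAP source before committing the connector settings. A rough sketch using System.DirectoryServices, where the server, search base, filter and attribute list are all examples only:

# Trial run of an LDAP filter and attribute list (all values are examples).
$searchRoot = New-Object System.DirectoryServices.DirectoryEntry 'LDAP://dc01.contoso.local/OU=Workstations,DC=contoso,DC=local'
$searcher   = New-Object System.DirectoryServices.DirectorySearcher $searchRoot

$searcher.Filter   = '(&(objectClass=computer)(operatingSystem=Windows 10*))'
$searcher.PageSize = 1000
'name','operatingSystem','description' | ForEach-Object { [void]$searcher.PropertiesToLoad.Add($_) }

$results = $searcher.FindAll()
"{0} rows returned" -f $results.Count
$results.Dispose()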

 ami09ami10
Continue the connector settings.

6. Connector Settings

Select the target class that the records will be imported into. This might be one of the base classes (such as Hardware Asset) or, if other relationships are required, a combination class (Type Projection) that contains the relationships required for the import.

Enter a Workflow log path to track import results and report on success/failure.

 ami11
Set the required options for the instance of the Asset Import connector. See below for more details on these options.

Once all options are selected, click Next.

 ami12
Asset Import Connector Options:

  • Test Mode – The connector will run and create a log file for inspection without committing any changes to the SCSM database.
  • This connector can create new items – When enabled, this option allows the connector to create new records within the database. This is used to allow the import of new records.
  • This connector can update existing items – When enabled, this option allows the connector to update existing records that match the key fields of the selected class.
  • This connector will DELETE ALL matching items only – This option changes the behaviour from creation to deletion of records. Any record matched from the import data to an instance of the class will be removed from the SCSM database. WARNING! If data is deleted it can not be recovered.
  • This connector will update multiple existing items matching specific custom keys – See the Mapping Fields section below for how the custom keys are defined.
  • Do not replace \n with a linefeed – By default, the import connector will interpret any \n text as representing a new line and will therefore replace it with a linefeed character within SQL.

7. Mapping Fields

Data Mappings allow the mapping of the specified input data to the properties of the selected target class within SCSM.

On the Data Mapping screen, if the option "This connector will update multiple existing items matching specific custom keys" was selected on the previous screen, the first option that will show is for Custom Keys. Custom Keys are used to find all existing matching items and update them as normal via the mappings below. At least one custom key is required.

The Custom Key can be any of the properties for the class that was selected for this connector.

Add the custom keys as required and map these to the data from the import source.

 ami13
 ami02 NOTE:

All Key Properties for the selected class as well as any Custom Keys are required fields and must be mapped to continue.

The property displayed in the left column will show all properties of the selected class, along with any extended properties that have been added for the class.

The Data Type in the middle column will show what input data type the property will expect. String (Key) identifies the primary key for the selected class.

The Mapped To value displayed in the right column will show drop-down values for each available column header from the specified source.

The Hardware Asset ID should be mapped to the primary key selection you chose in the Asset Management Settings. (Serial Number, Asset Tag, GUID, etc.)

Map all additional properties to the input data that is defined from the Input source.

Any properties that are mapped will be updated or entered as defined.

Any properties that are not mapped will not be updated.

 ami14
If a Combination Class is selected for the connector there will be additional mapping fields under the Relationship heading.

These can be used to map data from multiple classes together as relationships as required.

 ami15
Once all mappings are complete, click Next.

8. Connector Workflow Schedule

Some connectors will be run as a one-off to import bulk data into the SCSM database, whereas others might be run on a schedule to keep data from other sources up-to-date within the database.

An example of a scheduled data source might be a connector in to a Mobile Device Management (MDM) solution or an accounting or purchase system (for invoices and Purchase Orders).

For connectors that will only be run once, select the option marked This connector will be run manually.

When using this option, a warning message will be displayed to remind administrators that the connector will only run when using the Synchronize Now task within the console.

For a recurring schedule, enter the frequency as either daily or as a regular recurrence with a set interval.

Ensure the Connector Enabled option is ticked to allow the connector to run. This option may help with administration of the connector at a later date if it needs to be turned off for a period of time for maintenance or fault finding.

 ami16
When the scheduling information has been entered, click Create.  ami17

9. Manually Running a Connector

Once a connector has been created it will show within the Connectors node in the Administration workspace of the SCSM console. Within this node, administrators are able to see the current status of all connectors, when they were last started and finished and their percentage complete.

Administrators are also able to manually run a connector to either force the synchronization regardless of workflow schedule or to trigger a non-repeating connector.

To manually run a connector:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

 ami18
Select the Connector to be run and click the Synchronize Now task within the tasks pane.  ami19
If the connector does not have a schedule set (i.e. it is disabled), a message will appear informing you that the connector is disabled and asking if it should still be run.

Click Yes to run the Synchronization.

 ami20
The connector workflow will then be scheduled to start at the next opportunity for the workflow engine.

10. Exporting and Importing a Connector

Once a connector has been configured the settings can be exported to allow administrators to copy the connector to a different environment (dev to prod).

To export and import a connector:

Within the environment to export from:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

 ami21
Select the Connector to be exported and click the Export task within the tasks pane.

Save the connector XML file to a path and click Save.

 ami22
Within the environment to import in to:

On the Connectors node, select Create Connector from the drop down menu.

Select Asset Management Import Connector (Import) from the sub menu.

Browse to the folder containing the exported XML file, select the xml file to import and click OK.

 ami23
A window will appear to rename the Connector from its original name if required and change the Management Pack that holds the information.

If the connector is importing from a CSV file, an additional field will appear that is used to provide the source location of the CSV file required.

Enter the values needed and click OK.

 ami24
The connector will be imported and will now appear in the connectors node.

11. Deleting a Connector

If a connector is no longer needed, then it can be removed from the SCSM environment by deleting the connector from the console.

To delete a connector:

Within the environment the connector is to be removed from:

Within the SCSM console, select the Administration workspace.

Select the Connectors node.

 ami25
Click the Delete task from the tasks pane on the right of the screen.

Click OK on the message that appears to confirm the connector to be deleted.

If the connector has previously imported data, a second message will appear asking if the data that was imported by the connector should also be deleted.

 ami26

Hope this gives you a clear idea of how this app comes together and works for your organization.

Leave a comment if you have any additional questions.

 

Getting More From ConfigMgr For SCSM

We all love Microsoft's System Center Configuration Manager, and the vast majority of the industry loves it too. Microsoft recently announced that over 50 million endpoints are now managed by just the latest current branch build (1610): https://blogs.technet.microsoft.com/enterprisemobility/2016/11/18/configmgr-current-branch-surpasses-50m-managed-devices/?Ocid=C+E%20Social%20FY17_Social_TW_MSFTMobility_20161128_685262993

The number of data points returned from the ConfigMgr client is huge, and the data can be exceptionally useful when diagnosing issues or tracking down what is deployed in an organization.

However, out of the box, the data is limited to what Microsoft deems necessary. While this is fine most of the time, every now and then there is a requirement to find more or different information to track things that are not in the standard hardware inventory report.

A great example that was asked for recently is Monitors.

Some organizations want to be able to track monitors with their PC’s and therefore their locations etc.

What many people do not realise is that a monitor's cable (even VGA) passes very basic information back to the PC. This can include data relevant to the monitor such as:

  • Manufacturer
  • Model
  • Serial Number
  • Etc.

This data follows an industry standard called Extended Display Identification Data (EDID). Because the format is consistent, we can retrieve the data in a consistent way.

Once we retrieve the data we can use it to identify what Monitor is currently plugged in. All we then have to do is get the Configuration Manager client to return the data as part of the standard hardware inventory cycle.

Step 1: Storing the EDID Data Somewhere Locally

This step takes the EDID data and places it into a location from which we can easily retrieve it via the ConfigMgr client.

To achieve this, we need to get the client to interrogate the monitor for the EDID information then save the data to an easy to retrieve location, such as the WMI of the local machine.

To do this, we use PowerShell.

Here is the code you will need:
Test this script before you use it in prod. The script is provided as is and is not supported. (The usual drill)

# Reads the 4 bytes following $index from $array then returns them as an integer interpreted in little endian
function Get-LittleEndianInt($array, $index) {

    # Create a new temporary array to reverse the endianness in
    $temp = @(0) * 4
    [Array]::Copy($array, $index, $temp, 0, 4)
    [Array]::Reverse($temp)

    # Then convert the byte data to an integer
    [System.BitConverter]::ToInt32($temp, 0)
}

# Creates a new class in WMI to store our data including fields for each of the data points that we can return
function Create-Wmi-Class() {
    $newClass = New-Object System.Management.ManagementClass("root\cimv2", [String]::Empty, $null);
    $newClass["__CLASS"] = "MonitorDetails";
    $newClass.Qualifiers.Add("Static", $true)
    $newClass.Properties.Add("DeviceID", [System.Management.CimType]::String, $false)
    $newClass.Properties["DeviceID"].Qualifiers.Add("key", $true)
    $newClass.Properties["DeviceID"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("ManufacturingYear", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["ManufacturingYear"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("ManufacturingWeek", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["ManufacturingWeek"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("DiagonalSize", [System.Management.CimType]::UInt32, $false)
    $newClass.Properties["DiagonalSize"].Qualifiers.Add("read", $true)
    $newClass.Properties["DiagonalSize"].Qualifiers.Add("Description", "Diagonal size of the monitor in inches")
    $newClass.Properties.Add("Manufacturer", [System.Management.CimType]::String, $false)
    $newClass.Properties["Manufacturer"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("Name", [System.Management.CimType]::String, $false)
    $newClass.Properties["Name"].Qualifiers.Add("read", $true)
    $newClass.Properties.Add("SerialNumber", [System.Management.CimType]::String, $false)
    $newClass.Properties["SerialNumber"].Qualifiers.Add("read", $true)
    $newClass.Put()
}

# Check whether we already created our custom WMI class on this PC, if not, create it
[void](Get-WmiObject MonitorDetails -ErrorAction SilentlyContinue -ErrorVariable wmiclasserror)

# If the wmiClassError is returned then assume that the WMI class does not exist yet and try to create a WMI class to hold the Monitor info
# If creating the WMI class fails, exit with error code 1
if ($wmiclasserror) {
    try { Create-Wmi-Class }
    catch {
        "Could not create WMI class"
        Exit 1
    }
}

# Iterate through the monitors in Device Manager
$monitorInfo = @() # Empty array
Get-WmiObject Win32_PnPEntity -Filter "Service='monitor'" | foreach-object { $k=0 } {
    $mi = @{}
    $mi.Caption = $_.Caption
    $mi.DeviceID = $_.DeviceID

    # Then look up its data in the registry
    $path = "HKLM:\SYSTEM\CurrentControlSet\Enum\" + $_.DeviceID + "\Device Parameters"
    $edid = (Get-ItemProperty $path EDID -ErrorAction SilentlyContinue).EDID

    # Some monitors, especially those attached to VMs, either don't have a Device Parameters key or an EDID value. Skip these
    if ($edid -ne $null) {

        # Collect the information from the EDID array in a hashtable
        $mi.Manufacturer += [char](64 + [Int32]($edid[8] / 4))
        $mi.Manufacturer += [char](64 + [Int32]($edid[8] % 4) * 8 + [Int32]($edid[9] / 32))
        $mi.Manufacturer += [char](64 + [Int32]($edid[9] % 32))
        $mi.ManufacturingWeek = $edid[16]
        $mi.ManufacturingYear = $edid[17] + 1990
        $mi.HorizontalSize = $edid[21]
        $mi.VerticalSize = $edid[22]
        $mi.DiagonalSize = [Math]::Round([Math]::Sqrt($mi.HorizontalSize*$mi.HorizontalSize + $mi.VerticalSize*$mi.VerticalSize) / 2.54)

        # Walk through the four descriptor fields
        for ($i = 54; $i -lt 109; $i += 18) {

            # Check if one of the descriptor fields is either the serial number or the monitor name
            # If yes, extract the 13 bytes that contain the text and append them into a string
            if ((Get-LittleEndianInt $edid $i) -eq 0xff) {
                for ($j = $i+5; $edid[$j] -ne 10 -and $j -lt $i+18; $j++) { $mi.SerialNumber += [char]$edid[$j] }
            }
            if ((Get-LittleEndianInt $edid $i) -eq 0xfc) {
                for ($j = $i+5; $edid[$j] -ne 10 -and $j -lt $i+18; $j++) { $mi.Name += [char]$edid[$j] }
            }
        }

        # If the horizontal size of this monitor is zero, it's a purely virtual one (i.e. RDP only) and shouldn't be stored
        if ($mi.HorizontalSize -ne 0) {
            $monitorInfo += $mi
        }
    }
}

#$monitorInfo
# Clear WMI
Get-WmiObject MonitorDetails | Remove-WmiObject

# And store the data in WMI
$monitorInfo | % { $i=0 } {
    [void](Set-WmiInstance -Path \\.\root\cimv2:MonitorDetails -Arguments @{DeviceID=$_.DeviceID; ManufacturingYear=$_.ManufacturingYear; `
        ManufacturingWeek=$_.ManufacturingWeek; DiagonalSize=$_.DiagonalSize; Manufacturer=$_.Manufacturer; Name=$_.Name; SerialNumber=$_.SerialNumber})

    #"Set-WmiInstance -Path \\.\root\cimv2:MonitorDetails -Arguments @{{DeviceID=`"{0}`"; ManufacturingYear={1}; ManufacturingWeek={2}; DiagonalSize={3}; Manufacturer=`"{4}`"; Name=`"{5}`"; SerialNumber=`"{6}`"}}" -f $_.DeviceID, $_.ManufacturingYear, $_.ManufacturingWeek, $_.DiagonalSize, $_.Manufacturer, $_.Name, $_.SerialNumber
    $i++
}

The script needs to run on each PC at a regular interval so that when a monitor is added to or removed from a PC, the information stays up-to-date. Save the PowerShell script to a location that can be used by SCCM as the source location of a package. This location will be referenced as the Source Location for the remainder of this procedure.

Open the System Center 2012 Configuration Manager console  clip_image001
Select the Software Library workspace  clip_image002
Expand the Application Management node and select the Packages node  clip_image003
Select the subfolder where the package will be created, right click and select Create Package from the drop down list  clip_image004
Enter the following information:

Name: Monitor Details Gather

Description: Extract the monitor EDID information from the client and store the data in WMI ready for collection by SCCM

Version: 1.0

Click the checkbox labelled The package contains source files and click Browse

 clip_image005
Enter the UNC path to the Source Location folder created earlier in this procedure.

Click OK

Once back on the package screen, click Next

 clip_image006
Select Standard Program and click Next  clip_image007
Enter the following information:

Name: Get Monitor Details

Command Line: get-monitor-details.ps1

Run: Normal

Programs can run: Whether or not a user is logged on

Click Next

 clip_image008
Leave all settings as default and click Next  clip_image009

Confirm the settings and click Next to create the package

When the package creation is completed, click Close

Within the console, right click on the package and select Distribute Content from the drop down list  clip_imageb001
Click Next  clip_imageb002
Click Add and select Distribution Point from the drop down list  clip_imageb003
Select the distribution points that require the content and click OK  clip_imageb004
Once all distribution points have been added, click Next  clip_imageb005
Confirm all the settings and click Next  clip_imageb006
When the Distribute Content Wizard is completed, click Close  clip_imageb007

Once the package is created, we need to deploy it to run on a regular schedule on clients. The script does need to be run regularly, as monitors will move from PC to PC over time. How frequently is up to each organization and what they are trying to achieve.

To setup a deployment:

Within the console, right click on the package and select Deploy from the drop down list  clip_imagec001
On the Collection field, click the Browse button  clip_imagec002
Select the collection that the script will be deployed to and click OK.

On the previous screen, click Next

 clip_imagec003
Confirm that the content has been distributed to a distribution point and click Next  clip_imagec004
Select Required as the installation type and click Next  clip_imagec005
On the schedule wizard screen, click New  clip_imagec006
Click the Schedule button  clip_imagec007
Select the start time for when the script will run on the workstations.

Select a custom interval and set this schedule to recur every 1 day.

Click OK.

 clip_imagec008
Click OK  clip_imagec009
Click Next  clip_imagec010
Leave all settings as default and click Next  clip_imagec011
Leave all settings as default and click Next  clip_imagec012
Confirm the settings and click Next to create the package  clip_imagec013
When the Deploy software wizard is completed, click Close  clip_imagec014

Step 2: Retrieve the WMI Data via ConfigMgr

Now that we have the data stored in the WMI we need to get the ConfigMgr client to return the data next time it does a Hardware Inventory of the clients.

To ensure it is possible to read the correct fields within ConfigMgr the WMI class needs to exist on at least one PC that you have access to.

Select a PC to run the script on and execute the PS1 file.

This PC will be used later to query the class that will allow System Center 2012 Configuration Manager to collect inventory from all other workstations.
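Before heading back into the console, it is worth confirming that the class created by the script above actually exists and holds data on that PC. Something like this, run locally:

# Confirm the custom class created by the script exists and is populated.
Get-WmiObject -Namespace root\cimv2 -Class MonitorDetails |
    Select-Object DeviceID, Manufacturer, Name, SerialNumber, DiagonalSize, ManufacturingYear |
    Format-Table -AutoSize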

Select the Administration workspace  clip_imaged001
Select the Client Settings node  clip_imaged002
Select the Default Client Settings item,
OR
a client settings item that affects all workstation clients. Right click and select Properties
 clip_imaged003
Select Hardware Inventory from the settings list  clip_imaged004
Click Set Classes  clip_imaged005
Click Add  clip_imaged006
Click Connect  clip_imaged007
Enter the Computer name that the script was run on earlier in this procedure and click Connect  clip_imaged008
Select the MonitorDetails class from the list and click OK.

If the MonitorDetails class is not there, then the script has not run successfully on the computer you are connecting to. Make sure you test the PowerShell script and repeat if necessary.

Once the class is selected, click OK on the remaining open windows

 clip_imaged009

This process tells the client to retrieve the WMI class that we just created and populated using our PowerShell script. Once this is set, it will not need to be revisited unless the client settings change or are recreated for any reason.

And there we have it.

The PowerShell script will run against clients, updating WMI, and as these clients report in their hardware inventory the monitor details will appear in Resource Explorer like any other hardware detail.

For many, this may be enough, as they will be able to report on the ConfigMgr database and get the results they are after. Others want a more thorough view of Asset Management and may want to pull this information into their Asset Management solution to show these relationships.
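For those happy to report straight off ConfigMgr, the inventoried data can be pulled back out through the SMS Provider once clients have reported in. A rough sketch only; the site server, site code and especially the inventory class name are assumptions, as the exact name depends on how the class was registered during the Set Classes step:

# Pull the inventoried monitor data back via the SMS Provider (site server, site code and
# class name are assumptions - check your Hardware Inventory class list for the real name).
$siteServer = 'CM01'
$siteCode   = 'PR1'
Get-WmiObject -ComputerName $siteServer -Namespace "root\sms\site_$siteCode" `
    -Query "SELECT * FROM SMS_G_System_MONITORDETAILS" |
    Select-Object ResourceID, Manufacturer, Name, SerialNumber, DiagonalSize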

In my next blog post, I will go through how to use the Cireson Asset Management Solution to pull in this data, create or update a Hardware Asset item for each monitor and finally how to associate it with the computer it is plugged in to.

An ITIL Change Management Checklist (Best Practices to Avoid Common Pitfalls)

In the last week I've been doing a couple of presentations on Change Management and where businesses should start. This post talks about the IT Service Management life-cycle and, most importantly, delivering services to our end users, or customers, that are successful, have little to no negative impact on business continuity during deployment, and reduce business risk wherever possible.

This post focuses on Change Management: where to start with it, what the best practices are, and how we make it easier on ourselves.

To kick off, I think it is important that we have a clear idea of what a change is and why change management is important.

“A change is defined by ITIL as the Addition, Modification or Removal of anything that could have an effect on IT services.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Now I would make one slight modification to this statement and replace IT Services with Business Services.

Why should we restrict the amazing work we are doing to just IT?

ITIL also tells us that “Change Management provides value by promptly evaluating and delivering the changes required by the business, and by minimizing the disruption and rework caused by failed changes.” (ITIL v3 Foundation Handbook. TSO, 2009, pp. 92-93)

Some of the key words that we should all keep a focus on are “Promptly” evaluating, “Minimizing” disruption to end users and also “Minimizing Rework” that is required when a change fails.

There are some common challenges that people have when looking at Change Management for the first time, or looking at improving their Change Management processes. Here are three of the main challenges that I see when discussing Change Management with clients:

Stuck with someone else’s mess

Many people fail before they even start because they are buried in a mess created before they arrived. Either because of a failed attempt to get change management implemented or just a complicated system that has always existed.

And as we know many systems are just maintained because “That’s the way it’s always been done”.

Getting buy in from the entire business is important. Having the business behind a push for better change management will enable you to wipe the slate clean and build something less complex and more targeted to the business.

Not sure where to start

Change management can be a big beast and can get people all bent out of shape knowing they will have to follow some sort of formalized process.

However, as we will see, there is no need for Change management to be as complex as people think it will be.

It’s Too Complex

Yes, this would have to be my personal number one bug bear with some change management processes.

But as we mentioned earlier ITIL tells us the change management should “Provide value by promptly evaluating and delivering the changes…..”

So if a change management process is taking too long or is arduous, then we know we have it wrong.

Too many fingers in the pie

This is an oversimplification of this point.

What I'm trying to explain here is that many of the groups or sub-groups we work with have set procedures at varying levels of maturity, and quite often these independent groups think they have it right and want to take care of it all themselves.

However, these processes are often independent of each other and can get in each other’s way or “rework” the same information or data several times.

Imagine, if you will, that every car manufacturer had their own set of road rules. Individually these rules may work and may be a perfect procedure for that manufacturer. That's all well and good until we all start sharing the same road.

Then, we have chaos.

Even though everyone is following a set and tested procedure, if we don't take into consideration all the other systems within our business, then we see conflicting issues and changes that were doomed to fail.

Specifically in IT, as our systems become ever more complex, these issues occur on an all too frequent basis.

I'm sure everyone has an example of where a minor change to one system had a catastrophic outcome for some unrelated system that no one knew about or had considered.

Good change management can reduce the amount of time spent on unplanned work but it has to be effective.

Bad change management will just add an administration layer to the firefighting we always do.

This is both a waste of time and does not reduce the amount of unplanned work we have.

From what we have talked about so far there are some basic rules we can stick to that will help guide us to a good Change Management process.

Promptly is the key

If a process takes too long then no one is going to want to follow it. High risk issues are always going to take longer but there is no need to drag our feet where we don’t need to.

Low risk issues should be able to be speedily processed and maybe even automatically approved.

Which leads us to our next point,

Fit for Purpose

There is no need to bother your CAB with basic routine changes. If the CAB can clearly define what they require for a basic low risk change then make sure your process hits that and move on.

CAB have bigger fish to fry and more risk to deal with.

So why not have a simple process for Low risk changes. One Change Manager to review then do the change. SIMPLE!

How do we make sure that we capture these key points?

Standardization

Create templates where possible. Inject data we already know so people don't have to guess at values or (as I am sure we have all seen) simply make them up.

It is more important to be able to get a consistent and complete result than it is to get the perfect result. Consistency allows us to report and see where we are doing well, and where we can improve.

More processes to make all this happen is NOT the solution. Often less is more when it comes to these processes.

We can all think of a change that we SHOULD do but never quite get around to it. How about rebooting a server? Depending on the server this could be low risk, minimal impact, not worth going to CAB over….  But should it be a change?

Remember a change is defined as “…the Addition, Modification or Removal of anything that could have an effect on IT services.”

Well why not have a change process that accepts that a low risk server can be rebooted without CAB approval, just so long as it is recorded?

Why not automate it?!
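As a sketch of what "record it, then do it" could look like, here is a rough outline using the community SMLets module: log a pre-approved standard change against the server and then restart it. The server name is a placeholder and the whole thing is illustrative rather than a finished runbook:

# Rough sketch only: record a pre-approved standard change, then reboot (assumes the SMLets module).
Import-Module SMLets

$server  = 'APPSERVER01'                                 # example target server
$crClass = Get-SCSMClass -Name System.WorkItem.ChangeRequest$

New-SCSMObject -Class $crClass -PropertyHashtable @{
    Id          = 'CR{0}'                                # SCSM expands {0} to the next number
    Title       = "Standard change: scheduled reboot of $server"
    Description = 'Low risk, pre-approved by CAB as a standard change. Recorded for audit only.'
}

Restart-Computer -ComputerName $server -Force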

Of course none of this is any good if we don’t know the risk.

More specifically, Correct Risk.

So what is the best way to assign risk to our IT Services?

This is a big topic that usually takes many committees and sub committees, revisions, arguments, another committee and more arguments.

There is a much simpler way to assign risk to an IT service and you most probably have already done it. D.R.

We classify Disaster Recovery in most organisations. We have had the argument and we have spent the money to make sure those “Business Critical” systems have good quality D.R.

If you are like most organizations I've worked for, you will have gone through the process of "What do we cover with DR?"

And we start by including EVERYTHING.

We then get the quote back from a vendor on what that DR solution would cost and we quickly write off half a dozen services without even thinking.

And again and again we go until we have a DR solution that covers our Business Critical systems.

Guess what? They are High risk.

Some systems we have internal HA solutions for. Maybe SCSM with 2 Management servers.

Not Critical….  We could live off paper and phone calls for a few hours or even days without it….   Let’s say medium risk.

Then we have everything else. Low risk.

Simple. Why over complicate it?

So all the theory in the world is all well and good but what are some real world examples? Well I’m glad you asked.

I wanted to show you a range of items that had a common theme but ranged in complexity and risk. That way I can demonstrate to you the way it can be done in the real world with a real scenario.

What better scenario than our own products.

However, there is no reason that these examples could not be easily translated to ANY system that you may have in your organisation.

Our first example is the Cireson Essentials Apps. Like Advanced Send E-mail, View Builder, and Tier Watcher.

These can be classified as "Low Risk" because only IT Analysts use them, and IT Analysts can do their job without them; even though it would take more effort, they could still work.

Second is the Self Service Portal. This affects more than just the IT Analysts: the impact is on end users, who would be unable to view and update their Work Items or use Self Service. But all of this can still be done via phone or e-mail, although it will take longer and not be as simple for end users.

Finally, an Asset Management upgrade is a high risk change for our system. Asset Management affects a wide range of systems and impacts SR's as well as support details that analysts use.

In addition, the upgrade of AM is not a simple task. During the upgrade there are reports, cubes, management packs, DLL’s and PC client installs that are required.

So let’s take a look at what this looks like in the real world.

So when creating a change management process surely there are some simple steps we can follow to get the ball rolling.

Here is what I like to tell people are the 4 key pieces of a successful change management practice.

Less Process

Keep the process as simple and as prompt as possible. Start by creating basic process templates for Low, Medium and High risk changes. There are always going to be exceptions that we have to add a little more process for, but wherever possible stick to the three standard templates.

TEST!

The number 1 reason for failure of changes that I've been involved in is a lack of testing. There is nothing like the old "Worked fine in my lab…" line.

The amount of rework or "unplanned" work that stems from an untested change is huge, and even just a little testing can catch big issues.

Get the right people involved

We are not always experts in what a system is, what it does or how it should work.

How many times has your testing for an application package been to install it and if it installs without an error, it must be good?

What if when an end user logs on the whole thing crashes?

So even getting end users involved in your testing of minor changes can be a huge benefit.

And finally….

Review

So many places I see never have a formal review process.

These are critical for making sure the processes don’t stray from a standardized approach and that we know that all the obvious things are right and the whole thing is not going to blow up when we hit go.

Just reviewing the failures to find what went wrong is not enough.

It is also important that the changes that went right are fed back into future decisions to avoid the ones that go wrong. This feedback should find its way back into change templates AND base documentation for the systems so we keep all our processes up-to-date.

These don’t have to be long but these reviews can identify ways to make the process simpler, where a template can be created or where the CAB can be cut out of the picture entirely!

One fantastic question I had recently was "How many changes should we template?"
This is a great question, as many people think that templating everything they do is a waste of time. This is not the case.
If you have a change that you do on a recurring basis (not a one-off), even if it only occurs once every six months or year (in fact I'd argue especially if it is only once or twice a year), it is worth templating for two main reasons:

  • Does anyone remember the correct process for the change?
    Often a single person is responsible for a change and all the knowledge can be locked up in their head. By templating processes this way, we can ensure that the knowledge is available to everyone so if the original person is no longer available the change can still be a success.
  • Was the process successful last time we ran it and if not, what went wrong so we don’t do it again?
    If you are only doing a change once or twice a year a template is a great way of making sure that lessons learnt are never lost and mistakes are worked out of the process.

A standard approach might include a set of standard activities that are carried across all risk levels, but we just keep adding to the process as we move up the risk profile. A basic template might look something like this:

CR Standardization

The above is just an example. It is by no means prescriptive guidance that you should follow religiously, but more of a starting point. There are always going to be changes that require more steps because they are more complex. What is important is that the basics are covered and a standard approach is followed to encompass the key points outlined in this article.

So to sum this all up:

  • Prompt and Simple Process. Make it quick and simple
  • Standardize ALL changes to a simple set of rules and create templates
  • Make sure your changes are fit for purpose. Only bother the CAB when you need to and have the right people involved
  • Simple risk calculation (use disaster recovery plans if you don’t know where to start)
  • TEST, TEST and RETEST!
  • Review and document your changes to improve what you do

Runbook is in an Invalid State

PROBLEM


A common issue I run into a lot with SCSM automation is the following error message:

The Runbook associated with this Runbook activity template <Name of template>, is in an invalid state. Select another Runbook or ensure that the Orchestrator connector is properly configured

Error message in the SCSM console

CAUSE


This is caused by the Runbook being in an invalid state within SCSM, not within Orchestrator.

To see what I mean, within SCSM, Navigate to the Library workspace and select the Runbooks node.

Invalid Runbook

When a Runbook within SCSM is in an invalid state, it is usually because the input properties of the Initialize Data activity within the Runbook itself have been changed since the first sync of the Orchestrator connector, and SCSM does not know what to do with the new properties (or the removal of the old ones).

SOLUTION


The solution is fairly straightforward.

Within the SCSM Console, select the Runbook that has a status of “Invalid” and select Delete.
This will delete it from the SCSM Console and not Orchestrator.

Then re-run the Orchestrator Connector:

  1. Select the Administration workspace
  2. Select the Connectors node
  3. Selecting the Orchestrator connector you need to re-run
  4. Click Synchronize Now in the tasks pane

Re-run the Orchestrator Connector

Once the connector has finished it should all be back to normal.

Orchestrator Runbook Running….. but not.

I've had several customers come to me over the past few years complaining about one or more Runbooks showing as being in a running state but with no sign of any activity, neither in the log within the Runbook Designer nor in the console.

Problem


As you can see here the Runbook has been invoked and is in “Play” but there is no log data showing what step it is currently processing.

runbook

CAUSE


The thing the Runbooks have in common is that they are triggered from Service Requests within SCSM, usually from a Request Offering on the self-service portal.
On closer inspection, it turns out that when passing properties to a Runbook, the Initialize Data activity does not "cleanse" the data, so reserved characters are not escaped when used as input to the Runbook. So when a value gets passed that contains a character like &, > or <, Orchestrator tries to interpret it as a command.

Solution


Don’t use &, < or > in any value that you pass to Orchestrator.

Within SCSM ensure any enumeration list or simple list that the end users may select from do not contain the &, < or > characters.

What gets harder is if the end user types this detail into a free-form text field. This you can prevent with a little .NET Regular Expression trickery.

On any free text field where the end user can enter whatever they see fit and whose value will be passed to a Runbook, use the following .NET Regular Expression filter to block the special characters:

^[a-zA-Z0-9~!@#$%*()=+;:,.? -]+$
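A quick way to see the filter in action is to test it in PowerShell before adding it to the Request Offering:

# Quick test of the filter: plain text passes, reserved characters are rejected.
$pattern = '^[a-zA-Z0-9~!@#$%*()=+;:,.? -]+$'

'Please add me to the Sales DL' -match $pattern   # True
'Sales & Marketing'             -match $pattern   # False
'if x < y then escalate'        -match $pattern   # False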

 

Service Manager – Incident “Stop the clock”

 

A close friend and fellow SCSM nerd solved a commonly asked question about pausing the SLO clock for Incidents.
This blog post covers the solution he came up with.

Thanks Shayne for sharing your Stop the Clock solution

System Center User Community Newcastle

Stop the clock – Pause SLA

Hey Everyone,

I have been asked a few times if I can post a blog about my “Stop the clock” solution I put in at my previous job. So, here it is!!

There are a few prerequisites.

You need to create the Incident Status values you want to be included in the status changes that will trigger the "Stop the clock" workflow. Once these are created, follow the steps below.

1: Create Custom MP.

2: Create Notification Subscription including queues (Incident P1, P2, P3, "Paused") and what will kick off the workflow (status change from x to y). Important: Create in Custom MP.

3: Create SLO's in Custom MP – Resolution Time P1, Resolution Time P2, Resolution Time P3, Response Time P1, Response Time P2, Response Time P3.

4: Export Custom MP.

5: Open in XML Editor.

6: Find line – “NotificationSubscription_’GUID ID of…

View original post 194 more words

Neat Trick to Get SCSM Service Request Attachments

Have you ever wished that you could get the attachments out of an SCSM work item without having to go into the console and dig them out?

I know I did.

With the almighty power of Google, I found a few PowerShell scripts that do this, and I thought I would share how I have used some of that code. In this example I am using a SCORCH Runbook to call this PS script. This way we can leverage the Runbook in other automation activities.

First, an example or two of how this can be used.

Let's say we have an SCSM Request Offering published via the portal. In this RO, the user can attach a document (let's call it a work order). Normally, the analysts processing this request would have to dig it out of the SCSM work item manually. By utilizing this Runbook, we can automate storing the attachment in a network folder. Perhaps this folder is monitored by your document management system, or maybe it's just a central repository for these documents.

Another way you might use this is to email that attachment to another party via Runbook automation. The user submits the attachment, the job gets logged, the Runbook kicks off, the attachment is stored in a temporary folder and then attached to an email sent from SCORCH.

So how do we do this?

An overview of the Runbook (pretty simple hey)

rb

The Initialize Data activity has some fields that we might like to pass along to the following activities. You can plan this ahead of time to best suit the way you wish to use it. You will need one for the SC Object GUID of the Service Request. I've also added a field for Destination and the friendly ID of the work item (e.g. SR123456). The script creates a folder with that ID to store the files in. Your Runbook will need to know this ID to move that folder after the script has run. You could pass the variable out of the script to the Runbook, but if you are already passing the SR's SC Object GUID, it's easy enough to also pass through the SR ID.

The Run .Net Script activity holds a PowerShell script, and it is going to invoke a PSSession on the SCSM server. You could also use the Execute PowerShell Script activity with the appropriate details of your SCSM Server.

Here is the script.

$Session = New-PSSession -ComputerName "scsmserver"
Invoke-Command -Session $Session -ScriptBlock {
    Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Powershell\System.Center.Service.Manager.psd1'
    $SMServer = "scsmserver"
    $SR = Get-SCClassInstance -ComputerName $SMServer -Id '{SC Object GUID from "Initialize Data"}'
    $targetclass = Get-SCSMRelationship -ComputerName $SMServer -DisplayName "Has File Attachment" | where {$_.Source -eq (get-scsmclass -ComputerName $SMServer -Name System.WorkItem)}
    $files = $SR.GetRelatedObjectsWhereSource($targetclass)
    $ArchiveRootPath = "C:\Temp\OrchestratorRemote\"
    # For each file, archive to entity folder
    $filelist = @()
    if ($files -ne $Null)
    {
        # Create archive folder
        $nArchivePath = $ArchiveRootPath + $SR.Id
        New-Item -Path ($nArchivePath) -ItemType "directory" -Force | Out-Null

        $files | %{
            Try
            {
                $filelist += "$nArchivePath$_"
                $fileId = $_.EnterpriseManagementObject.Id
                $fileobject = get-scsmclassinstance -ComputerName $SMServer -Id $fileId
                $fs = [IO.File]::OpenWrite(($nArchivePath + "\" + $_.EnterpriseManagementObject.DisplayName))
                $memoryStream = New-Object IO.MemoryStream
                $buffer = New-Object byte[] 8192
                [int]$bytesRead | Out-Null
                while (($bytesRead = $fileobject.Content.Read($buffer, 0, $buffer.Length)) -gt 0)
                {
                    $memoryStream.Write($buffer, 0, $bytesRead)
                }
                $memoryStream.WriteTo($fs)
            }
            Finally
            {
                $fs.Close()
                $memoryStream.Close()
            }
        }
    }
    $file1 = $filelist[0]
    $file2 = $filelist[1]
    $file3 = $filelist[2]
}

Remove-PSSession $Session

What that has done is copied any attached files from our Service Request to C:\Temp\OrchestratorRemote\(SR ID) on the SCSM Server. You can alter that path to whatever suits.

The last activity in the Runbook is going to move that folder to the path specified in Initialize Data. Add a "move folder" activity, with the source as \\scsmserver\c$\Temp\OrchestratorRemote\ plus the SR ID from "Initialize Data", and the destination path as {Destination from "Initialize Data"}.
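And if the email route mentioned earlier is the one you want, the same folder can just as easily be mailed out at the end of the Runbook with a few lines of PowerShell. The SMTP server, addresses and folder here are placeholders only:

# Email everything the extraction script dropped into the work item's folder (placeholders throughout).
$folder = 'C:\Temp\OrchestratorRemote\SR123456'

Send-MailMessage -SmtpServer 'smtp.contoso.local' `
    -From 'servicedesk@contoso.local' -To 'vendor@partner.com' `
    -Subject 'Work order attached' `
    -Body 'Please find the submitted work order attached.' `
    -Attachments (Get-ChildItem -Path $folder -File).FullName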

Now, if you ever need to get attachments out of work items – you can just invoke this Runbook and off it goes.

Making Better Use of Service Request User Input in SCSM – Part 3

At a recent meeting of the Adelaide System Center User Community, and also at Cireson Innovate 2015, I presented a Self-Service Automation Deep Dive (https://vimeo.com/143957653) and showed several automation techniques that I have used over the years. In this series of blog posts I will explain these techniques, why they are useful and how to go about automating them in your environment. (And maybe even share a Runbook or two.)

In Part 1 we looked at taking the Affected User’s name and placing it in the title of the service request to make it easier to find when looking at a queue of work.
In Part 2 we looked at taking some basic user input text fields and placing them in the Description field of the service request.
In Part 3 we will look at taking a multi select query field and enter each of the results in the description field so they are nicely formatted.

What’s Wrong With User Input?

When an end user enters data into a service request, the results are recorded in properties of the associated SR or any activities that are related to that SR. To try and "help" the analyst, these values are then also displayed in the User Input section on the SR form.

With text and drop down lists the data is shown in a format that is easy to read.

image_thumb3_thumb

Query results, however, are not as easy.

image_thumb1_thumb

Not only is the selected value shown but also the GUID of the item. If the query is set to allow multiple selections, like the one above, there can be multiple values all mixed together.

This is far from simple and some analysts, especially second and third level support, may never actually read this data or understand what it all means.

How Could User Input Be Used Better?

There are a couple of approaches I will cover that I think are the most useful ways to use this User Input data on a day-to-day basis. This data would be much more useful if it was added, nicely formatted, to the SR description, or if key pieces of data were even added to the Title.

In Part 1 we looked at the title only, and in Part 2 we covered the Description field.

Multiple Values in the Description

In this example SR a user can select multiple values from a query of Business Service CI’s that are distribution lists.
Instead of having to use the User Input field to determine which DL to add the user to, it would be much easier for the analyst to have this information nicely formatted in the description field for easy retrieval and action.

Like so:

image_thumb6_thumb

Once the user has selected multiple Configuration Items (CI's), they are then associated with, or related to, the Work Item we are dealing with. (In this case, and in most cases, the work item is a Service Request.)

A Runbook that we might use to update the SR description text with a list of all the items selected by the user (in this case distribution lists) might look a little like this:

Update Description Runbook

Looks simple enough; let's quickly run through it:

  1. Get the Service Request item back from SCSM
  2. Get any Service Request and User relationships
  3. Filter the user CI’s that have been returned to only pass the Affected User
  4. Get the Affected User’s AD User CI from SCSM and update the title of the SR
  5. Get any AD Groups that have a relationship to the SR
  6. Get the Related AD Group and update the Description of the SR with the Distribution list

Seems simple enough. Each AD User group that represents a Distribution List will be found and the value written to the SR Description like so:

Update Description Runbook2

The issue is that, for each AD Group that is found, that "branch" (for lack of a better word) of the Runbook will run once. So if the user selects 3 AD Groups, the branch (labelled #6 in the previous image) will run 3 times.

Instead of appending the AD Group display name to the description each time, it will overwrite the description with a new value each time, so the end result will be the last of the 3 groups being listed and nothing else.

To get around this and list each one, we need to do the following:

  1. Read the SR Description as it is right now
  2. Read the Display Name of the AD Group
  3. Write the Original SR Description back to the SR and append the AD Group we just found.

So how do we do this?

To do this we remove the Get AD Group and Update SR activities from our existing Runbook, and replace them with a single Invoke Runbook activity. Like so:

Update Description Runbook3

We then have to create a new Runbook that this will call each time.

Update Description Runbook4

As we described above this Runbook will:

  1. Read the SR Description as it is right now
  2. Read the Display Name of the AD Group
  3. Write the Original SR Description back to the SR and append the AD Group we just found.

The Update SR Description activity just has the description from the Get SR Object activity, plus the Display Name from the Get AD Group activity. Like so:

Update Description Runbook5

So long as the SR Description field in the template already has all the precursor text we want, that will provide the heading for us.

Such as: “Please add this user to the following AD Groups:”

The final output should look something like:

Please add this user to the following AD Groups:
– Social Club
– All Users
– Accounting
– Asia Pacific
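As an aside, if you would rather collapse the read-and-append step into a single Run .Net Script activity instead of the child Runbook shown above, a rough SMLets sketch of the same idea might look like this. The SR ID and group name would normally come from Orchestrator published data, so treat it as illustrative only:

# Rough alternative to the child Runbook: read the current description and append one group name.
Import-Module SMLets

$srId      = 'SR123456'          # would come from published data
$groupName = 'Social Club'       # would come from published data

$srClass = Get-SCSMClass -Name System.WorkItem.ServiceRequest$
$sr      = Get-SCSMObject -Class $srClass -Filter "Id -eq $srId"

# Append the group to whatever is already in the description
Set-SCSMObject -SMObject $sr -Property Description -Value ($sr.Description + "`n- " + $groupName)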

Like always, I hope this was helpful.