Link or customize a benchmark KPI
You can customize KPI conditions to better fit your organization's needs.
Before you begin
The Benchmarks admin role does not provide application-specific roles. Therefore, although a Benchmarks admin can access a KPI through the Benchmarks application, changes to KPI conditions require the role specific to the KPI application.
For example, without the corresponding application roles (knowledge_admin, sla_admin, asset, or sn_si.special_access), a Benchmarks admin cannot modify conditions from within Benchmarks for the knowledge base KPI (Knowledge Use [kb_use] table), the SLA KPI (Task SLA [task_sla] table), the ITOM KPIs (CMDB Health Scorecard [cmdb_health_scorecard] table), or the Security Operations KPIs (Vulnerable Item [sn_vul_vulnerable_item] and Security Incident [sn_si_incident] tables).
Role required: sn_bm_client.benchmark_admin
About this task
Customizing KPI conditions lets you adjust the criteria so that a KPI more accurately represents the data your company is interested in. If you change the conditions for a KPI, its scores reflect both (as sketched after this list):
- Data for the previous condition (up until the date the condition was changed)
- Data for the new condition from that date forward
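As a minimal illustration of this date-based split (plain TypeScript, not ServiceNow code; the record shape, conditions, and change date are hypothetical):

```typescript
// Hypothetical sketch of how a condition change splits KPI data by date.
// IncidentRecord, oldCondition, and newCondition are illustrative only,
// not ServiceNow objects.
interface IncidentRecord {
  openedAt: Date;
  priority: number;
}

// Old condition counted only priority 1 incidents; the new condition
// (effective from changeDate) also counts priority 2.
const oldCondition = (r: IncidentRecord): boolean => r.priority === 1;
const newCondition = (r: IncidentRecord): boolean => r.priority <= 2;

// Records dated before the change are scored with the previous
// condition; records from the change date forward use the new one.
function kpiCount(records: IncidentRecord[], changeDate: Date): number {
  return records.filter((r) =>
    r.openedAt < changeDate ? oldCondition(r) : newCondition(r)
  ).length;
}
```

In other words, scores collected before the change date are not retroactively recalculated under the new condition.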
For further analysis, you can link a Performance Analytics indicator that you are already using to the corresponding Benchmarks indicator to see breakdowns, in addition to individual scores, when drilling down on KPI data.
Procedure
- Navigate to All > Benchmarks > Setup and click a KPI to access the KPI conditions.
- Change the conditions, as appropriate. If you prefer to review KPI records from a script, see the sketch after these steps.
- To link a Performance Analytics indicator to a Benchmarks indicator, follow the procedure in Link an automated indicator to a benchmark.
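For scripted review, the standard Table API (/api/now/table/{table}) can list records. The following is only a sketch: the instance URL and table name are loud placeholders, since the table backing the Benchmarks KPI list is not named in this article.

```typescript
// Hypothetical sketch: list KPI records through the ServiceNow Table API.
// KPI_TABLE is a placeholder -- substitute the table that backs the
// Benchmarks KPI list on your instance (check the list view's URL).
const INSTANCE = "https://your-instance.service-now.com"; // placeholder
const KPI_TABLE = "x_benchmark_kpi"; // hypothetical table name

async function listKpiRecords(user: string, password: string) {
  const url = `${INSTANCE}/api/now/table/${KPI_TABLE}?sysparm_limit=10`;
  const response = await fetch(url, {
    headers: {
      Accept: "application/json",
      Authorization: "Basic " + btoa(`${user}:${password}`),
    },
  });
  if (!response.ok) {
    throw new Error(`Table API request failed: ${response.status}`);
  }
  // Condition fields on ServiceNow records are stored as encoded query
  // strings, for example "priority=1^state=6".
  const body = await response.json();
  return body.result;
}
```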
Table 1. Performance Analytics KPI to Benchmarks KPI link

| Benchmarks KPI | Formula | Performance Analytics KPI |
| --- | --- | --- |
| Number of active users | -- | Benchmark: Number of active users |
| Total time to close changes | -- | Benchmark: Total time to close changes |
| % of failed changes | [Number of unsuccessful changes during the month] / [Total number of all changes closed during the same month] | Benchmark: % of failed changes |
| Number of unsuccessful changes | -- | Benchmark: Number of unsuccessful changes |
| Number of changes closed | -- | Benchmark: Number of changes closed |
| % of emergency changes | [Number of emergency changes closed during the month] / [Total number of all changes closed during the same month] | Benchmark: % of emergency changes |
| Number of emergency changes closed | -- | Benchmark: Number of emergency changes closed |
| Average time to close a change | [[Benchmark: Total time to close changes]] / [[Benchmark: Number of changes closed]] | Benchmark: Average time to close a change |
| % of stale CIs | Monthly average. | Benchmark: % of stale CIs |
| % of duplicate CIs | Monthly average. | Benchmark: % of duplicate CIs |
| % of non-compliant CIs | Monthly average. | Benchmark: % of non-compliant CIs |
| Average time to resolve a high priority incident | [[Benchmark: Total time to resolve high priority incidents]] / [[Benchmark: Number of high priority incidents resolved]] | Benchmark: Average time to resolve a high priority incident |
| % of incidents resolved within SLA | ([[Benchmark: Number of resolved incidents with SLAs]] - [[Benchmark: Number of resolved incidents with breached SLAs]]) / [[Benchmark: Number of resolved incidents with SLAs]] * 100 | Benchmark: % of incidents resolved within SLA |
| Total number of incidents resolved by problem | -- | Benchmark: Total number of incidents resolved by problem |
| Number of incidents closed that were re-opened | -- | Benchmark: Number of incidents closed that were re-opened |
| Number of high priority incidents | -- | Benchmark: Number of high priority incidents resolved |
| Average time to resolve an incident | [[Benchmark: Total time to resolve an incident]] / [[Benchmark: Number of incidents resolved]] | Benchmark: Average time to resolve an incident |
| Number of resolved incidents with SLAs | -- | Benchmark: Number of resolved incidents with SLAs |
| Number of incidents created per user | [[Benchmark: Number of incidents created / By month SUM]] / [[Benchmark: Number of active users / By month AVG]] | Benchmark: Number of incidents created per user |
| Total time to resolve high priority incidents | -- | Benchmark: Total time to resolve high priority incidents |
| % of high priority incidents | [[Benchmark: Number of high priority incidents resolved]] / [[Benchmark: Number of incidents resolved]] * 100 | Benchmark: % of high priority incidents |
| Number of incidents created | -- | Benchmark: Number of incidents created |
| Number of incidents resolved with attached KB articles | -- | Benchmark: Number of incidents resolved with attached KB articles |
| Number of resolved incidents with breached SLAs | -- | Benchmark: Number of resolved incidents with breached SLAs |
| Total number of incidents | -- | Benchmark: Number of incidents resolved |
| Number of incidents resolved on first assignment | -- | Benchmark: Number of incidents resolved on first assignment |
| % of incidents resolved on first assignment | [[Benchmark: Number of incidents resolved on first assignment]] / [[Benchmark: Number of incidents resolved]] * 100 | Benchmark: % of incidents resolved on first assignment |
| Total time to resolve incidents | -- | Benchmark: Total time to resolve an incident |
| % of reopened incidents | [[Benchmark: Number of incidents closed that were re-opened]] / [[Benchmark: Number of incidents closed]] * 100 | Benchmark: % of reopened incidents |
| Number of incidents closed | -- | Benchmark: Number of incidents closed |
| Number of knowledge article views | -- | Benchmark: Number of knowledge article views |
| Number of knowledge base views per user | [[Benchmark: Number of knowledge article views / By month SUM]] / [[Benchmark: Number of active users / By month AVG]] | Benchmark: Number of knowledge base views per user |
| % of incidents resolved using KB articles | [[Benchmark: Number of incidents resolved with attached KB articles]] / [[Benchmark: Number of incidents resolved]] * 100 | Benchmark: % of incidents resolved using KB articles |
| Number of survey instances | -- | Benchmark: Number of survey instances |
| Number of active ITIL users | -- | Benchmark: Number of active ITIL users |
| Normalized customer satisfaction | -- | Benchmark: Normalized customer satisfaction |
| Average customer satisfaction | [[Benchmark: Normalized customer satisfaction]] / [[Benchmark: Number of survey instances]] | Benchmark: Average customer satisfaction |
| Number of requesters per fulfiller | ([[Benchmark: Number of active users]] - [[Benchmark: Number of active ITIL users]]) / [[Benchmark: Number of active ITIL users]] | Benchmark: Number of requesters per fulfiller |
| % of high priority problems | [[Benchmark: Number of high priority problems closed]] / [[Benchmark: Number of problems closed]] * 100 | Benchmark: % of high priority problems |
| Total time to close problems | -- | Benchmark: Total time to close problems |
| % of incidents resolved by problem | [[Benchmark: Total number of incidents resolved by problem]] / [[Benchmark: Number of incidents resolved]] * 100 | Benchmark: % of incidents resolved by problem |
| Number of high priority problems closed | -- | Benchmark: Number of high priority problems closed |
| Number of problems closed | -- | Benchmark: Number of problems closed |
| Average time to close a problem | [[Benchmark: Total time to close problems]] / [[Benchmark: Number of problems closed]] | Benchmark: Average time to close a problem |
| % of critical and high priority security incidents | [[Benchmark: Number of Critical and High Priority Security Incidents]] / [[Benchmark: Number of Security Incidents]] * 100 | Benchmark: % of critical and high priority security incidents |
| Number of critical and high priority security incidents | -- | Benchmark: Number of Critical and High Priority Security Incidents |
| Number of security incidents | -- | Benchmark: Number of Security Incidents |
| Number of requests created | -- | Benchmark: Number of requests created |
| Number of requests closed | -- | Benchmark: Number of requests closed |
| Average time to fulfil a request | [[Benchmark: Total time to fulfil requests]] / [[Benchmark: Number of requests closed]] | Benchmark: Average time to fulfil a request |
| Number of closed requests with SLAs | -- | Benchmark: Number of closed requests with SLAs |
| Number of requests created per user | [[Benchmark: Number of requests created / By month SUM]] / [[Benchmark: Number of active users / By month AVG]] | Benchmark: Number of requests created per user |
| Number of closed requests with breached SLAs | -- | Benchmark: Number of closed requests with breached SLAs |
| Total time to fulfil requests | -- | Benchmark: Total time to fulfil requests |
| % of closed requests with breached SLAs | [[Benchmark: Number of closed requests with breached SLAs]] / [[Benchmark: Number of closed requests with SLAs]] * 100 | Benchmark: % of closed requests with breached SLAs |
| Number of critical vulnerability items | -- | Benchmark: Number of critical vulnerability items |
| Number of vulnerability items | -- | Benchmark: Number of vulnerability items |
| Average vulnerability age | [[Benchmark: Summed age of vulnerable items]] / [[Benchmark: Number of vulnerability items]] | Benchmark: Average age of vulnerable items |
| Average critical vulnerability age | [[Benchmark: Summed age of critical vulnerable items]] / [[Benchmark: Number of critical vulnerability items]] | Benchmark: Average age of critical vulnerable items |
| Summed age of critical vulnerable items | -- | Benchmark: Summed age of critical vulnerable items |
| Summed age of vulnerable items | -- | Benchmark: Summed age of vulnerable items |

Table 2. Performance Analytics KPI to Benchmarks VA KPI link

| Benchmarks KPI | Formula | Performance Analytics KPI | Description |
| --- | --- | --- | --- |
| Call Deflection (VA Triaged and Created) | -- | Benchmark: Call Deflection (VA Triaged and Created) | Incidents created using Virtual Agent and tracked using the ITSM VA-Triage and Created deflection pattern. |
| Call Deflection (VA Catalog Analytics) | -- | Benchmark: Call Deflection through VA (Catalog Analytics) | Requests created using Virtual Agent and tracked every time Virtual Agent referred a catalog for submission. |
| % Call Deflection | ([[Benchmark: Call Deflection (VA Triaged and Created)]] + [[Benchmark: Call Deflection through VA (Catalog Analytics)]]) / ([[Benchmark: Number of requests created]] + [[Benchmark: Number of incidents created]]) * 100 | Benchmark: % VA Deflection | The percentage of times the Virtual Agent automatically resolved incidents or completed requests. |
| IAR (Intercept & Resolved) | -- | Benchmark: VA IAR (Intercept & Resolved) | Incidents that were resolved using the ITSM VA-Intercept & Resolved deflection pattern. |
| VA Self Resolved | -- | Benchmark: VA Self Resolved | Every time the user was able to self-resolve an incident or complete a request using the ITSM VA-Self Resolving deflection pattern. |
| Number of incidents resolved | -- | Benchmark: Number of incidents resolved | The total number of incidents resolved. |
| % of incidents that were auto-resolved using VA | ([[Benchmark: VA IAR (Intercept & Resolved)]] + [[Benchmark: VA Self Resolved]]) / ([[Benchmark: Number of incidents resolved]] + [[Benchmark: VA Self Resolved]]) * 100 | Benchmark: % of incidents that were auto-resolved using VA | Percentage of incidents that were auto-resolved using Virtual Agent. |
| Number of monthly users of VA | -- | Benchmark: Number of monthly users of VA | Number of monthly users who are using Virtual Agent. |
| % of users using Virtual Agent | [[Benchmark: Number of monthly users of VA]] / [[Benchmark: Number of active users]] * 100 | Benchmark: % of users using Virtual Agent | Percentage of users in the organization using Virtual Agent as an engagement channel. |
| Number of VA conversations handed over to an agent | -- | Benchmark: Number of VA conversation handed over to an agent | Conversations that were not resolved by the Virtual Agent and had to be handed off to a live agent. |
| Total conversations in VA | -- | Benchmark: Total conversations in VA | All conversations that occurred in the Virtual Agent. |
| % of VA conversations handed over to live agent | [[Benchmark: Number of VA conversation handed over to an agent]] / [[Benchmark: Total conversations in VA]] * 100 | Benchmark: % of VA conversations handed over to live agent | Percentage of conversations that were handed off to a live agent. |
| VA normalized Satisfaction score | -- | Benchmark: VA normalized Satisfaction score | Overall normalized score of all the surveys conducted for Virtual Agent conversations. |
| VA survey instances | -- | Benchmark: VA survey instances | Total number of surveys responded to for assessing the satisfaction score. |
| Virtual Agent CSAT score | [[Benchmark: VA normalized Satisfaction score]] / [[Benchmark: VA survey instances]] | Benchmark: VA CSAT score | Average Virtual Agent satisfaction score. |
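To make the formula pattern in these tables concrete, here is a small worked example of two derived KPIs (plain TypeScript with illustrative numbers; in practice the component scores come from the monthly Benchmarks collection):

```typescript
// Worked example of two derived KPIs from Table 1, using illustrative
// monthly component scores rather than real instance data.

// % of reopened incidents =
//   [Number of incidents closed that were re-opened] /
//   [Number of incidents closed] * 100
function percentReopenedIncidents(reopened: number, closed: number): number {
  return (reopened / closed) * 100;
}

// Average time to resolve an incident =
//   [Total time to resolve incidents] / [Number of incidents resolved]
function avgTimeToResolveIncident(totalTime: number, resolved: number): number {
  return totalTime / resolved;
}

// Illustrative numbers: 12 of 240 closed incidents were reopened (5%),
// and 1920 hours of resolution time across 240 resolved incidents
// averages 8 hours per incident.
console.log(percentReopenedIncidents(12, 240)); // 5
console.log(avgTimeToResolveIncident(1920, 240)); // 8
```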