
CMDB Health troubleshooting

Use the following information to track and troubleshoot CMDB Health processes.


By default, only error messages are logged to the syslog table, with the source name CmdbHealth. To enable logging of 'info' and 'warning' messages (which are typically logged at the start and end of each processing cycle), you need to update the system property glide.cmdb.logger.use_syslog.CMDBHealth. For information about using this property, see CMDB health system properties.
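With info and warning logging enabled, you can pull the CmdbHealth entries from the System Log over the standard Table API. A minimal sketch, assuming the instance name `myinstance` is a placeholder for your own:

```javascript
// Sketch: build a Table API request URL for CMDB Health log entries.
// The /api/now/table/{table} endpoint and the syslog table are standard
// platform features; the instance name is a placeholder.
function buildCmdbHealthLogUrl(instance) {
  return 'https://' + instance + '.service-now.com/api/now/table/syslog' +
         '?sysparm_query=' + encodeURIComponent('source=CmdbHealth') +
         '&sysparm_fields=sys_created_on,level,message';
}

console.log(buildCmdbHealthLogUrl('myinstance'));
```

With only the default configuration this query returns error entries; after updating glide.cmdb.logger.use_syslog.CMDBHealth, info and warning entries appear under the same source.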

Processing status

If scheduled jobs are enabled but data does not appear on the CMDB dashboard, check the processing status in the CMDB Health Metric Status [cmdb_health_metric_status] table. The status of the stalled metric determines how to proceed.
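The check above can be sketched as a filter over rows exported from that table. The field and metric names below are illustrative, not the exact schema:

```javascript
// Sketch: given rows exported from cmdb_health_metric_status (field names
// assumed for illustration), list metrics that never reached a final state.
function findStalledMetrics(rows) {
  var finalStates = ['Complete', 'Max Failures', 'Daily Time Out Pause'];
  return rows
    .filter(function (r) { return finalStates.indexOf(r.state) === -1; })
    .map(function (r) { return r.metric; });
}

var rows = [
  { metric: 'Orphan',    state: 'Complete' },
  { metric: 'Duplicate', state: 'In Progress' },
  { metric: 'Staleness', state: 'Max Failures' }
];
console.log(findStalledMetrics(rows)); // [ 'Duplicate' ]
```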

Initially, the state of all metrics is 'In Progress'.

Possible final states of a sub-metric:

Complete
All classes are processed and the number of failures is under the maximum failures threshold.
Max Failures
The number of failures for this metric reached the maximum failures threshold. Processing is aborted and starts over in the next run.
Daily Time Out Pause
The processor reached the processing time limit. Processing is paused and resumes in the next run.
At the end of a processing cycle, the final state of a major metric depends on the final states of its sub-metrics. Possible final states of a major metric:

Complete
All sub-metrics are in the Complete state and score calculation is complete.
Max Failures
The score is not calculated because one of the sub-metrics reached its maximum failures threshold.
Daily Time Out Pause
Processing timed out because one of the sub-metrics reached its processing time limit.

Processing time

If processing of a metric times out, you can identify which class takes too long to process. This helps you determine whether any validation rules are inefficient.

The progress of each metric is tracked in the CMDB Health Processor Status [cmdb_health_processor_status] table. The status of classes that have been processed for a metric is Complete; classes that are yet to be processed are in Draft. By comparing the update time of each class, you can calculate how long each class took to process.
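That calculation can be sketched as the gap between consecutive update timestamps, assuming classes are processed sequentially. The `ci_class` field name is illustrative; `sys_updated_on` is the standard platform update timestamp:

```javascript
// Sketch: estimate per-class processing time from cmdb_health_processor_status
// rows, using the gap between consecutive sys_updated_on timestamps.
// Assumes classes are processed one after another; ci_class is a placeholder
// field name.
function processingSeconds(rows) {
  var sorted = rows.slice().sort(function (a, b) {
    return new Date(a.sys_updated_on) - new Date(b.sys_updated_on);
  });
  var result = {};
  for (var i = 1; i < sorted.length; i++) {
    result[sorted[i].ci_class] =
      (new Date(sorted[i].sys_updated_on) - new Date(sorted[i - 1].sys_updated_on)) / 1000;
  }
  return result;
}

var statusRows = [
  { ci_class: 'cmdb_ci_computer',     sys_updated_on: '2024-01-01T10:00:00Z' },
  { ci_class: 'cmdb_ci_server',       sys_updated_on: '2024-01-01T10:05:00Z' },
  { ci_class: 'cmdb_ci_linux_server', sys_updated_on: '2024-01-01T10:35:00Z' }
];
console.log(processingSeconds(statusRows));
// { cmdb_ci_server: 300, cmdb_ci_linux_server: 1800 }
```

A class with an unusually large gap (here, 1800 seconds) is the one to investigate for slow validation rules.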

Fixing orphan records due to broken hierarchy

Orphan rules might detect an orphan CI that you cannot access or delete. Alternatively, the list view that displays the orphan records might not match the total number of records. Both symptoms occur when records were deleted in the database from only one table in the CMDB hierarchy.

These CI records are not accessible via GlideRecord and must be deleted directly from the database. In this case, contact customer support to delete the orphan CI from the database.

Orphan test results provide the details of exactly where the hierarchy is broken. For example, the message "This cmdb_ci_linux_server CI [91054fc24f22520053d6e1d18110c713] is missing record in cmdb_ci_computer table" means that the record with that sys_id must be deleted from the cmdb, cmdb_ci, cmdb_ci_hardware, cmdb_ci_server, and cmdb_ci_linux_server tables (the Computer class is between the Hardware and Server classes in the hierarchy).
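Because each CI class stores part of its record in every ancestor table, the full set of tables involved can be derived by walking the class hierarchy. A minimal sketch, using the chain from the example above:

```javascript
// Sketch: given a child-to-parent class map, list every table that stores part
// of a CI record and must therefore hold a row with the same sys_id.
// The hierarchy below matches the example in the text.
var parentOf = {
  cmdb_ci_linux_server: 'cmdb_ci_server',
  cmdb_ci_server: 'cmdb_ci_computer',
  cmdb_ci_computer: 'cmdb_ci_hardware',
  cmdb_ci_hardware: 'cmdb_ci',
  cmdb_ci: 'cmdb'
};

function tableChain(ciClass) {
  var chain = [];
  for (var t = ciClass; t; t = parentOf[t]) chain.push(t);
  return chain.reverse();
}

console.log(tableChain('cmdb_ci_linux_server'));
// [ 'cmdb', 'cmdb_ci', 'cmdb_ci_hardware', 'cmdb_ci_computer',
//   'cmdb_ci_server', 'cmdb_ci_linux_server' ]
```

In the example message, the row is missing from cmdb_ci_computer, so the remaining tables in this chain still hold rows with that sys_id and are the ones customer support needs to clean up.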