
2.11 Monitoring and Validation

2.11.1
 
Institutions should demonstrate that their rating models continue to perform adequately over time. All rating models should be monitored on a regular basis and independently validated according to all the principles articulated in the MMS. For that purpose, institutions should establish a list of metrics to measure the performance and stability of models and compare these metrics against pre-defined limits.
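The comparison of monitoring metrics against pre-defined limits is often implemented as a traffic-light scheme. The following is a minimal sketch in Python; the metric names, thresholds and red/amber/green bands are purely illustrative assumptions, not prescribed values:

```python
# Pre-defined limits per metric: (amber threshold, red threshold).
# Values below are examples only; each institution sets its own limits.
LIMITS = {
    "accuracy_ratio": (0.50, 0.40),   # breach when the metric falls BELOW
    "psi": (0.10, 0.25),              # breach when the metric rises ABOVE
}

def rag_status(metric: str, value: float) -> str:
    """Return a red/amber/green status for one monitoring metric."""
    amber, red = LIMITS[metric]
    if metric == "psi":  # higher is worse for stability indices
        if value >= red:
            return "red"
        return "amber" if value >= amber else "green"
    # lower is worse for discriminatory-power metrics
    if value <= red:
        return "red"
    return "amber" if value <= amber else "green"

print(rag_status("accuracy_ratio", 0.45))  # -> amber
print(rag_status("psi", 0.30))             # -> red
```

Breaches of the amber or red limit would then trigger the escalation and remediation actions defined in the institution's monitoring framework.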
 
2.11.2
 
The choice of metrics used to validate rating models should be made carefully. These metrics should be sufficiently granular and should capture performance through time. It is highly recommended to track the change in a model's discriminatory power through time, for instance by defining a maximum acceptable drop in the accuracy ratio.
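As an illustration of such a check, the accuracy ratio can be derived from the area under the ROC curve (AR = 2·AUC − 1) and compared against its value at development. A sketch under stated assumptions: the function names and the 10% relative-drop tolerance are examples for illustration, not prescribed limits.

```python
def auc_from_scores(scores, defaults):
    """Rank-based AUC: probability that a defaulter carries a higher
    risk score than a non-defaulter, counting ties as one half."""
    bads = [s for s, d in zip(scores, defaults) if d == 1]
    goods = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bads for g in goods)
    return wins / (len(bads) * len(goods))

def accuracy_ratio(scores, defaults):
    """Accuracy ratio (Gini coefficient): AR = 2 * AUC - 1."""
    return 2.0 * auc_from_scores(scores, defaults) - 1.0

def ar_drop_breached(ar_current, ar_development, max_rel_drop=0.10):
    """Flag a breach when AR has dropped, relative to its level at
    development, by more than the assumed 10% tolerance."""
    return (ar_development - ar_current) / ar_development > max_rel_drop
```

Here higher scores are assumed to indicate higher risk; a model that perfectly separates defaulters from non-defaulters yields an AR of 1, and a random model yields an AR near 0.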
 
2.11.3
 
In addition to the requirement articulated in the MMS related to the validation step, for rating models in particular, institutions should ensure that validation exercises include the following components:
 
 (i) Development data: A review of the data collection and filtering process performed during the development phase and/or the last recalibration. In particular, this review should cover the definition of default and data quality.
 (ii) Model usage: A review of the governance surrounding model usage. In particular, the validator should comment on (a) the frequency of rating issuance, (b) the governance of rating production, and (c) the variability of ratings produced by the model. The validator should also liaise with the credit department to form a view on (d) the quality of financial inputs and (e) the consistency of the subjective inputs and the presence of potential bias.
 (iii) Rating overrides: A review of rating overrides. This point does not apply to newly developed models.
 (iv) Model design: A description of the model design and its mathematical formulation, together with a view on the appropriateness of the design, the choice of factors and their transformations.
 (v) Key assumptions: A review of the appropriateness of the key assumptions, including the default definition, the segmentation and the rating scale employed when developing the model.
 (vi) Validation data: A description of the data set employed for validation.
 (vii) Quantitative review: An analysis of the key quantitative indicators covering, at a minimum, model stability, discriminatory power, sensitivity and calibration. This analysis should cover the predictive power of each quantitative and subjective factor driving the rating.
 (viii) Documentation: A review of the quality of the documentation surrounding the development phase and the modelling decisions.
 (ix) Suggestions: When deemed appropriate, the validator may make suggestions for defect remediation to be considered by the development team.
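One widely used stability indicator for the quantitative review in point (vii) is the Population Stability Index (PSI), which compares the rating-grade distribution of the current portfolio against the development sample. A minimal sketch; the grade counts and the common 0.25 alert threshold are illustrative assumptions rather than prescribed values:

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index across rating grades:
    PSI = sum over grades of (a% - e%) * ln(a% / e%),
    where e% is the development share and a% the current share."""
    e_tot = sum(expected_counts)
    a_tot = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = e / e_tot
        a_pct = a / a_tot
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

# Identical grade distributions give a PSI of zero.
print(psi([100, 200, 300], [10, 20, 30]))  # -> 0.0
```

A PSI near zero indicates a stable population; larger values signal a shift in the portfolio relative to the development sample that may warrant recalibration.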