  • 2 Rating Models

    • 2.1 Scope

      2.1.1
       
The vast majority of institutions employ rating models to assess the creditworthiness of their obligors. Rating models provide essential metrics that serve as foundations for multiple core processes within institutions. Ratings have implications for key decisions, including, but not limited to, risk management, provisioning, pricing, capital allocation and Pillar II capital assessment. Institutions should pay particular attention to the quality of their rating models and of the subsequent PD models presented in the next section.
       
      2.1.2
       
Inadequate rating models can result in material financial impacts due to a potentially incorrect estimation of credit risk. The CBUAE will pay particular attention to the suitability of the design and calibration of rating and PD models. Rating models that continue to underperform even after several recalibrations should be replaced and should no longer be used for decision making and reporting.
       
      2.1.3
       
For the purpose of the MMG, a rating and a score should be considered identical concepts, that is, a numerical quantity without units representing the relative creditworthiness of an obligor or a facility on a predefined scale. The main objective of rating models is to segregate obligors (or facilities) that are likely to perform under their current contractual obligations from those that are unlikely to perform, given a set of information available at the rating assessment date.
       
      2.1.4
       
The construction of rating models is well documented by practitioners and in the academic literature. Therefore, it is not the objective of this section to elaborate on the details of modelling techniques. Rather, this section focuses on minimum expected practices and on the challenges that should attract institutions’ attention.
       
    • 2.2 Governance and Strategy

      2.2.1
       
      The management of rating models should follow all the steps of the model life-cycle articulated in the MMS. The concept of model ownership and independent validation is particularly relevant to rating models due to their direct business implications.
       
      2.2.2
       
It is highly recommended that institutions develop rating models internally based on their own data. However, in certain circumstances, such as for low default portfolios, institutions may rely on support from third party providers. This support can take several forms, presented below through a simplified categorisation. In all cases, the management and calibration of models should remain the responsibility of institutions. Consequently, institutions should define, articulate and justify their preferred type of modelling strategy surrounding rating models. This strategy will have material implications on the quality, accuracy and reliability of the outputs.
       
      2.2.3
       
      The choice of strategy has a material impact on the methodology employed. Under all circumstances, institutions remain accountable for the modelling choices embedded in their rating models and their respective calibrations.
       
      2.2.4
       
Various combinations of third party contributions exist. These can be articulated around the supplier’s contribution to the model development, the IT system solution and/or the data for the purpose of calibration. Simplified categories are presented below for the purpose of establishing minimum expected practices:
       
       (i)
       
      Type 1 – Support for modelling: A third party consultant is employed to build a rating model based on the institution’s own data. The IT infrastructure is fully developed internally. In this case, institutions should work in conjunction with consultants to ensure that sufficient modelling knowledge is retained internally. Institutions should ensure that the modelling process and the documentation are compliant with the principles articulated in the MMS.
       (ii)
       
Type 2 – Support for modelling and infrastructure: A third party consultant provides a model embedded in a software solution that is calibrated based on the institution’s data. In this case, the institution has less control over the design of the rating model. The constraints of such an approach are as follows:
       
        a.
       
      Institutions should ensure that they understand the modelling approach being provided to them.
        b.
       
      Institutions should fully assess the risks related to using a system solution provided by external parties. At a minimum, this assessment should be made in terms of performance, system security and stability.
        c.
       
      Institutions should ensure that a comprehensive set of data is archived in order to perform validations once the model is implemented. This data should cover both the financial and non-financial characteristics of obligors and the performance data generated by the model. The data should be stored at a granular level, i.e. at a factor level, in order to fully assess the performance of the model.
       
       (iii)
       
Type 3 – Support for modelling, infrastructure and data: In addition to Type 2 support, a third party consultant provides data and/or a ready-made calibration. This is the weakest form of control by institutions. For such models, the institution should demonstrate that additional control and validation are implemented in order to reduce Model Risk. Immediately after the model implementation, the institution should start collecting internal data (where possible) to support the validation process. Such validation could result in a material shift in obligors’ ratings and lead to financial implications.
       (iv)
       
      Type 4 – Various supports: In the case of various supports, the minimum expected practices are as follows:
       
        a.
       
      If a third party provides modelling services, institutions should ensure that sufficient knowledge is retained internally.
        b.
       
      If a third party provides software solutions, institutions should ensure that they have sufficient controls over parameters and that they archive data appropriately.
        c.
       
      If a third party provides data for calibration, institutions should take the necessary steps to collect internal data in accordance with the data management framework articulated in the MMS.
       
      2.2.5
       
In conjunction with the choice of modelling strategy, institutions should also articulate the modelling method for their rating models. A range of possible approaches can be envisaged between two distinct categories: (i) data-driven statistical models that can rely on both quantitative and qualitative (subjective) factors, or (ii) expert-based models that rely only on the views of experienced individuals, without the use of statistical data. Between these two categories, a range of options exists. Institutions should consciously articulate the rationale for their modelling approach.
       
      2.2.6
       
Institutions should aim to avoid purely expert-based models, i.e. models with no data inputs. Purely expert-based models should be regarded as the weakest form of models and therefore seen as the least preferable option. If the portfolio rated by such a model represents more than 10% of the institution’s loan book (other than facilities granted to governments and financial institutions), then the institution should demonstrate that additional control and validation are implemented in order to reduce Model Risk. It should also ensure that Senior Management and the Board are aware of the uncertainty arising from such a model. Immediately after the model implementation, the institution should start collecting internal data to support the validation process.
       
    • 2.3 Data Collection and Analysis

      2.3.1
       
Institutions should manage and collect data for rating models in compliance with the MMS. The data collection, cleaning and filtering should be fully documented in such a way that it can be traced by any third party.
       
      2.3.2
       
      A rigorous process for data collection is expected. The type of support strategy presented in earlier sections has no implications on the need to collect data for modelling and validation.
       
      2.3.3
       
For the development of rating models, the data set should include, at a minimum, (i) the characteristics of the obligors and (ii) their performance, i.e. whether they were flagged as defaulted. For each rating model, the number of default events included in the data sample should be sufficiently large to permit the development of a robust model. This minimum number of defaults will depend on business segments, and institutions should demonstrate that this minimum number is adequate. If the number of defaults is too small, alternative approaches should be considered.
       
      2.3.4
       
      At a minimum, institutions should ensure that the following components of the data management process are documented. These components should be included in the scope of validation of rating models.
       
 (i) Analysis of data sources,
 (ii) Time period covered,
 (iii) Descriptive statistics about the extracted data,
 (iv) Performing and non-performing exposures,
 (v) Quality of the financial statements collected,
 (vi) Lag of financial statements,
 (vii) Exclusions and filters (a minimal waterfall sketch follows this list), and
 (viii) Final number of performing and defaulted obligors by period.
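
The exclusions-and-filters item lends itself to a reproducible record. Below is a minimal sketch of a filter waterfall that logs the obligor and default counts remaining after each cleaning step, so the final development sample can be traced by any third party; all column names, filter rules and thresholds are hypothetical and not prescribed by the MMG.

```python
import pandas as pd

def apply_filter_waterfall(df: pd.DataFrame):
    """Apply cleaning filters sequentially and log surviving counts."""
    # Illustrative filters only; each institution defines and justifies its own.
    filters = [
        ("raw extract",            lambda d: d),
        ("non-missing financials", lambda d: d.dropna(subset=["total_assets", "revenue"])),
        ("statement lag <= 18m",   lambda d: d[d["statement_lag_months"] <= 18]),
        ("in-scope segment",       lambda d: d[d["segment"] == "corporate"]),
    ]
    log = []
    for name, step in filters:
        df = step(df)
        log.append({"filter": name,
                    "obligors": df["obligor_id"].nunique(),
                    "defaults": int(df["default_flag"].sum())})
    # The returned waterfall table belongs in the model documentation.
    return df, pd.DataFrame(log)
```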
       
    • 2.4 Segmentation

      2.4.1
       
      Segmentation means splitting a statistical sample into several groups in order to improve the accuracy of modelling. This concept applies to any population of products or customers. The choice of portfolio, customer and/or product segmentation has a material impact on the quality of rating models. Generally, the behavioural characteristics of obligors and associated default rates depend on their industry and size (for wholesale portfolios) and on product types (for retail portfolios). Consequently, institutions should thoroughly justify the segmentation of their rating models as part of the development process.
       
      2.4.2
       
The characteristics of obligors and/or products should be homogeneous within each segment in order to build appropriate models. First, institutions should analyse the representativeness of the data and pay particular attention to the consistency of obligor characteristics, industry, size and lending standards. The existence of a material industry bias in data samples should result in the creation of a rating model specific to that industry. Second, the obligor sample size should be sufficient to meet minimum statistical performance. Third, the definition of default employed to identify default events should be homogeneous across the data sample.
       
    • 2.5 Default Definition

      2.5.1
       
Institutions should define and document two definitions of default, employed in two different contexts: (i) for the purpose of rating model development and (ii) for the purpose of estimating and calibrating probabilities of default. These two definitions can be identical or, where justified, different. The scope of these definitions should cover all credit facilities and all business segments of the institution. In this process, institutions should apply the following principles.
       
      2.5.2
       
      For rating models: The definition of default in the context of a rating model is a choice made to achieve a meaningful discrimination between performing and non-performing obligors (or facilities). The terminology ‘good’ and ‘bad’ obligors is sometimes employed by practitioners in the context of this discrimination. Institutions should define explicitly the definition of default used as the dependent variable when building their rating models.
       
       (i)
       
      This choice should be guided by modelling considerations, not by accounting considerations. The level of conservatism embedded in the definition of default used to develop rating models has no direct impact on the institution’s P&L. It simply supports a better identification of customers unlikely to perform.
       (ii)
       
Consequently, institutions can choose amongst several criteria to identify default events in order to maximise the discriminatory power of their rating models. This choice should be made within boundaries. At a minimum, institutions should rely on the concept of days-past-due (“DPD”). An obligor should be considered in default if its DPD since the last payment due is greater than or equal to 90, or if it is identified as defaulted by the risk management function of the institution (a minimal sketch follows this list).
       (iii)
       
      If deemed necessary, institutions can use more conservative DPD thresholds in order to increase the predictive power of rating models. For low default portfolios, institutions are encouraged to use lower thresholds, such as 60 days in order to capture more default events. In certain circumstances, restructuring events can also be included to test the power of the model to identify early credit events.
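
A minimal sketch of the DPD-based rule in (ii) above, assuming a loan-level table with hypothetical columns dpd and risk_flagged_default; as noted in (iii), the threshold can be tightened, e.g. to 60 days for low default portfolios.

```python
import pandas as pd

def flag_defaults(df: pd.DataFrame, dpd_threshold: int = 90) -> pd.Series:
    """Flag an obligor as defaulted if DPD >= threshold, or if the risk
    management function has independently identified a default."""
    return (df["dpd"] >= dpd_threshold) | df["risk_flagged_default"].astype(bool)

# Stricter threshold to capture more events in a low default portfolio:
# df["default_flag"] = flag_defaults(df, dpd_threshold=60)
```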
       
      2.5.3
       
      For PD estimation: The definition of default in the context of PD estimation has direct financial implications through provisions, capital assessment and pricing.
       
       (i)
       
      This choice should be guided by accounting and regulatory principles. The objective is to define this event in such a way that it reflects the cost borne by institutions upon the default of an obligor.
       (ii)
       
For that purpose, institutions should employ the definition of default articulated in the CBUAE credit risk regulation, which is separate from the MMS and MMG. As the regulation evolves, institutions should update the definition employed for modelling and recalibrate their models.
       
    • 2.6 Rating Scale

      2.6.1
       
Rating models generally produce an ordinal indicator on a predefined scale representing creditworthiness. The scores produced by each model should be mapped to a fixed internal rating scale employed across all aspects of credit risk management, in particular for portfolio management, provision estimation and capital assessment. The rating scale should be the result of explicit choices made as part of the model governance framework outlined in the MMS. At a minimum, the institution’s master rating scale should comply with the following principles:
       
       (i)
       
      The granularity of the scale should be carefully defined in order to support credit risk management appropriately. An appropriate balance should be found regarding the number of grades. A number of buckets that is too small will reduce the accuracy of decision making. A number of buckets that is too large will provide a false sense of accuracy and could be difficult to use for modelling.
       (ii)
       
Institutions should ensure that the distribution of obligors (or exposures) spans most rating buckets. High concentration in specific grades should be avoided; conversely, the usage of many grades with no obligors should also be avoided. Consequently, institutions may need to define their rating grades differently from rating agencies’ grades, by expanding or grouping certain grades.
       (iii)
       
      The number of buckets should be chosen in such a way that the obligors’ probability of default in each grade can be robustly estimated (as per the next section on PD models).
       (iv)
       
      The rating scale from external rating agencies may be used as a benchmark, however their granularity may not be the most appropriate for a given institution. Institutions with a large proportion of their portfolio in non-investment grade rating buckets should pay particular attention to bucketing choices. They are likely to require more granular buckets in this portion of the scale to assess their risk more precisely than with standard scales from rating agencies.
       (v)
       
The choice of an institution’s rating scale should be substantiated and documented. The suitability of the rating scale should be assessed on a regular basis as part of model validation (a minimal implementation sketch follows this list).
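
A minimal implementation sketch of these principles, assuming a model score in [0, 1] where higher means more creditworthy; the eight-grade scale and the cut-offs are purely hypothetical. A Herfindahl index offers one simple check of concentration across grades, as in principle (ii).

```python
import numpy as np

# Hypothetical master scale: grade 1 = best creditworthiness.
GRADES = np.array(["1", "2", "3", "4", "5", "6", "7", "8"])
CUTOFFS = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.80, 0.90])  # ascending

def map_score_to_grade(scores: np.ndarray) -> np.ndarray:
    """Map model scores to master-scale grades via fixed cut-offs."""
    # digitize returns 0 for the lowest scores (worst grade "8") up to 7
    # for scores >= 0.90 (best grade "1").
    return GRADES[::-1][np.digitize(scores, CUTOFFS)]

def grade_concentration(grades: np.ndarray) -> float:
    """Herfindahl index of the grade distribution: values close to 1
    signal excessive concentration in a few grades."""
    shares = np.unique(grades, return_counts=True)[1] / grades.size
    return float((shares ** 2).sum())
```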
       
    • 2.7 Model Construction

      2.7.1
       
The objective of this section is to draw attention to the key challenges and minimum expected practices to ensure that institutions develop effective rating models. The development of retail scorecards is a standardised process that all institutions are expected to understand and implement appropriately on large amounts of data. Wholesale rating models tend to be more challenging due to smaller population sizes and the complexity of the factors driving defaults. Consequently, this section on model construction focuses on wholesale rating models.
       
      2.7.2
       
      Wholesale rating models should incorporate, at a minimum, financial information and qualitative inputs. The development process should include a univariate analysis and a multivariate analysis, both fully documented. All models should be constructed based on a development sample and tested on a separate validation sample. If this is not possible in the case of data scarcity, the approach should be justified and approved by the validator.
       
      2.7.3
       
      Quantitative factors: These are characteristics of the obligors that can be assessed quantitatively, most of which are financial variables. For wholesale rating models, institutions should ensure that the creation of financial ratios and subsequent variable transformations are rigorously performed and clearly documented. The financial variables should be designed to capture the risk profile of obligors and their associated financing. For instance, the following categories of financial ratios are commonly used to assess the risk of corporate lending: operating performance, operating efficiency, liquidity, capital structure, and debt service.
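
A minimal sketch of constructing a few such ratios from raw financial-statement fields (all column names are hypothetical); each transformation should be documented together with its economic rationale.

```python
import pandas as pd

def build_financial_ratios(fin: pd.DataFrame) -> pd.DataFrame:
    """Derive illustrative risk ratios from financial-statement fields."""
    out = pd.DataFrame(index=fin.index)
    out["ebitda_margin"] = fin["ebitda"] / fin["revenue"]                      # operating performance
    out["current_ratio"] = fin["current_assets"] / fin["current_liabilities"] # liquidity
    out["leverage"] = fin["total_debt"] / fin["total_assets"]                 # capital structure
    out["debt_service_ratio"] = fin["ebitda"] / fin["debt_service"]           # debt service
    return out
```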
       
      2.7.4
       
Qualitative subjective factors: These are characteristics of the obligor that are not easily assessed quantitatively, for instance the experience of management or the dependency of the obligor on its suppliers. The following categories of subjective factors are commonly used to assess the risk of corporate lending: industry performance, business characteristics and performance, management character and experience, and quality of financial reporting and reliability of auditors. The assessment of these factors is generally achieved via bucketing that relies on expert judgement. When using such qualitative factors, the following principles should apply:
       
       (i)
       
      Institutions should ensure that this assessment is based upon a rigorous governance process. The collection of opinions and views from experienced credit officers should be treated as a formal data collection process. The data should be subject to quality control. Erroneous data points should also be removed.
       (ii)
       
If the qualitative subjective factors are employed to adjust the outcome of the quantitative factors, institutions should control and limit this adjustment. Institutions should demonstrate that the weights given to the expert-judgement section of the model are appropriate. Institutions should not perform undue rating overrides with expert judgement (a minimal sketch of a capped adjustment follows this list).
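
One common way to control and limit the adjustment, sketched below under purely hypothetical weights and caps: blend the quantitative and qualitative grades with a fixed weight, then cap the movement attributable to expert judgement at a set number of notches.

```python
import numpy as np

def combined_grade(quant_grade: int, qual_grade: int,
                   qual_weight: float = 0.2, max_notch_shift: int = 2,
                   n_grades: int = 8) -> int:
    """Blend quantitative and qualitative grades; cap the qualitative
    influence at a fixed number of notches from the quantitative grade."""
    blended = round((1 - qual_weight) * quant_grade + qual_weight * qual_grade)
    capped = np.clip(blended, quant_grade - max_notch_shift,
                     quant_grade + max_notch_shift)
    return int(np.clip(capped, 1, n_grades))  # keep within the master scale
```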
       
      2.7.5
       
      Univariate analysis: In the context of rating model development, this step involves assessing the discriminatory power of each quantitative factor independently and assessing the degree of correlation between these quantitative factors.
       
       (i)
       
      The assessment of the discriminatory power should rely on clearly defined metrics, such as the accuracy ratio (or Gini coefficient). Variables that display no relationship or counterintuitive relationships with default rates should preferably be excluded. They can be included in the model only after a rigorous documentation of the rationale supporting their inclusion.
       (ii)
       
Univariate analysis should also involve an estimation of the correlations between the quantitative factors, with the aim of avoiding multicollinearity in the next step of the development (see the sketch after this list).
       (iii)
       
      The factors should be ranked according to their discriminatory power. The development team should comment on whether the observed relationship is meeting economic and business expectations.
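
A minimal sketch of the two checks above: the accuracy ratio is derived from the ROC AUC (AR = 2 × AUC − 1), and a pairwise correlation matrix screens for multicollinearity ahead of the multivariate step. Column names are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def univariate_report(df: pd.DataFrame, factors: list,
                      target: str = "default_flag") -> pd.DataFrame:
    """Rank candidate factors by univariate discriminatory power."""
    rows = []
    for f in factors:
        sub = df[[f, target]].dropna()
        auc = roc_auc_score(sub[target], sub[f])
        # Accuracy ratio (Gini); the sign indicates the direction of the
        # relationship, which should meet economic expectations.
        rows.append({"factor": f, "accuracy_ratio": 2 * auc - 1})
    return (pd.DataFrame(rows)
              .sort_values("accuracy_ratio", key=abs, ascending=False))

def correlation_screen(df: pd.DataFrame, factors: list) -> pd.DataFrame:
    """Pairwise factor correlations, to help avoid multicollinearity."""
    return df[factors].corr(method="spearman")
```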
       
      2.7.6
       
      Multivariate analysis: This step involves establishing a link between observed defaults and the most powerful factors identified during the univariate analysis.
       
       (i)
       
Common modelling techniques include, amongst others, logistic regressions and neural networks (a minimal logistic regression sketch follows this list). Institutions can choose amongst several methodologies, provided that the approach is fully understood and documented internally. This is particularly relevant if third party consultants are involved.
       (ii)
       
      Institutions should articulate clearly the modelling technique employed and the process of model selection. When constructing and choosing the most appropriate model, institutions should pay attention to the following:
       
        (a)
       
      The number of variables in the model should be chosen to ensure a right balance. An insufficient number of variables can lead to a sub-optimal model with a weak discriminatory power. An excessive number of variables can lead to overfitting during the development phase, which will result in weak performance subsequently.
        (b)
       
The variables should not be highly correlated. Each financial ratio should preferably be different in substance. If similar ratios are included, a justification should be provided and overfitting should be avoided.
        (c)
       
      In the case of bucketing of financial ratios, the defined cut-offs should be based on relevant peer comparisons supported by data analysis, not arbitrarily decided.
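
As an illustration of one technique named in (i), the sketch below fits a logistic regression on a development sample and reports the accuracy ratio on a separate validation sample; the factor shortlist is hypothetical and deliberately small to limit overfitting.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical shortlist surviving the univariate analysis.
FACTORS = ["debt_service_ratio", "current_ratio", "ebitda_margin", "leverage"]

def fit_rating_model(df: pd.DataFrame, target: str = "default_flag"):
    """Fit a logistic regression on a development sample; report the
    accuracy ratio on a held-out validation sample."""
    data = df.dropna(subset=FACTORS + [target])
    dev, val = train_test_split(data, test_size=0.3, random_state=42,
                                stratify=data[target])
    model = LogisticRegression(max_iter=1000).fit(dev[FACTORS], dev[target])
    scores = model.predict_proba(val[FACTORS])[:, 1]
    accuracy_ratio = 2 * roc_auc_score(val[target], scores) - 1
    return model, accuracy_ratio
```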
       
    • 2.8 Model Documentation

      2.8.1
       
      Rigorous documentation should be produced for each rating model as explained in the MMS. The documentation should be sufficiently detailed to ensure that the model can be fully understood and validated by any independent party.
       
      2.8.2
       
      In addition to the elements articulated in the MMS, the following components should be included:
       
       (i)
       
      Dates: The model development date and implementation date should be explicitly mentioned in all rating model documentation.
       (ii)
       
      Materiality: The model materiality should be quantified, for instance as the number of rated customers and their total corresponding gross exposure.
       (iii)
       
Overview: An executive summary covering the rating model strategy, the expected usage, an overview of the model structure and the data set employed to develop and test the model.
       (iv)
       
      Key modelling choices: The default definition, the rating scale and a justification of the chosen segmentation as explained in earlier sections.
       (v)
       
      Data: A description of the data employed for development and testing, covering the data sources and the time span covered. The cleaning process should be explained including the filter waterfall and/or any other adjustments used.
       (vi)
       
Methodology: The development approach, covering the modelling choices, the assumptions and limitations, and the parameter estimation technique; univariate and multivariate analyses discussing in detail the construction of factors, their transformation and their selection.
       (vii)
       
      Expert judgement inputs: All choices supporting the qualitative factors. Any adjustments made to the variables or the model based on expert opinions. Any contributions from consulted parties.
       (viii)
       
      Validation: Details of testing and validation performed during the development phase or immediately after.
       
    • 2.9 Usage of Rating Models

      2.9.1
       
Upon the roll-out of a new rating model and/or a newly recalibrated (optimised) rating model, institutions should update client ratings as soon as possible. Institutions should assign new ratings with the new model to 70% of the existing obligors (entering the model scope) within six (6) months and to 95% of the existing obligors within nine (9) months. The assignment of new ratings should be based on financials that have been updated since the issuance of the previous rating, where available; otherwise, prior financials should be used. This expectation applies to both wholesale and retail models.
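
A minimal sketch of tracking these thresholds after roll-out, assuming a hypothetical obligor table with last_rating_date and in_model_scope columns:

```python
import pandas as pd

def rerating_coverage(df: pd.DataFrame, rollout_date: str) -> pd.Series:
    """Share of in-scope obligors re-rated within 6 and 9 months of the
    roll-out date. Expected: >= 70% within 6 months, >= 95% within 9."""
    rollout = pd.Timestamp(rollout_date)
    rerated = pd.to_datetime(df.loc[df["in_model_scope"], "last_rating_date"])
    return pd.Series({
        "within_6m": rerated.between(rollout, rollout + pd.DateOffset(months=6)).mean(),
        "within_9m": rerated.between(rollout, rollout + pd.DateOffset(months=9)).mean(),
    })
```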
       
      2.9.2
       
      In order to achieve this client re-rating exercise in a short timeframe, institutions are expected to rate clients in batches, performed by a team of rating experts, independently from actual, potential or perceived business line interests.
       
      2.9.3
       
      Institutions should put in place a process to monitor the usage of rating models. At a minimum, they should demonstrate that the following principles are met:
       
       (i)
       
All ratings should be archived with a date that reflects the last rating update. This data should be stored in a secure database intended to be used on a regular basis to manage the usage of rating models.
       (ii)
       
The frequency of rating assignment should be tracked and reported to ensure that all obligors are rated appropriately and in a timely fashion (a minimal staleness check follows this list).
       (iii)
       
      Each rating model should be employed on the appropriate type of obligor defined in the model documentation. For instance, a model designed to assess large corporates should not be used to assess small enterprises.
       (iv)
       
      Institutions should ensure that the individuals assigning and reviewing ratings are suitably trained and can demonstrate a robust understanding of the rating models.
       (v)
       
If ratings are assigned by the business lines, they should be reviewed and independently signed off by the credit department to ensure that the estimation of ratings is not biased by short-term, potential or perceived business line interests.
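
In support of points (i) and (ii), a minimal staleness check (hypothetical column names) lists obligors whose last rating update exceeds a maximum acceptable age:

```python
import pandas as pd

def stale_ratings(df: pd.DataFrame, max_age_days: int = 365) -> pd.DataFrame:
    """List obligors whose rating has not been refreshed within the
    maximum acceptable age, for follow-up by the credit department."""
    age = (pd.Timestamp.today() - pd.to_datetime(df["last_rating_date"])).dt.days
    return df.loc[age > max_age_days, ["obligor_id", "rating", "last_rating_date"]]
```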
       
    • 2.10 Rating Override

      2.10.1
       
      In the context of the MMG, rating override means rating upgrade or rating downgrade. Overrides are permitted; however, they should follow a rigorously documented process. This process should include a clear governance mirroring the credit approval process based on obligor type and exposure materiality. The decision to override should be controlled by limits expressed in terms of number of notches and number of times a rating can be overridden. Eligibility criteria and the causes for override should be clearly documented. Causes may include, amongst others: (i) events specific to an obligor, (ii) systemic events in a given industry or region, and/or (iii) changes of a variable that is not included in the model.
       
      2.10.2
       
Rating overrides should be monitored and included in the model validation process. The validator should estimate the frequency of overrides and the number of notches between the modelled rating and the new rating. A convenient approach to monitoring overrides is to produce an override matrix.
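
A minimal sketch of such an override matrix as a cross-tabulation of modelled versus final ratings (hypothetical column names); off-diagonal cells count overrides, and their distance from the diagonal gives the number of notches moved.

```python
import pandas as pd

def override_matrix(df: pd.DataFrame) -> pd.DataFrame:
    """Cross-tabulate modelled vs. final (possibly overridden) ratings."""
    return pd.crosstab(df["model_rating"], df["final_rating"],
                       rownames=["modelled"], colnames=["final"])

# Override frequency: share of obligors whose final rating differs.
# override_rate = (df["model_rating"] != df["final_rating"]).mean()
```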
       
      2.10.3
       
In some circumstances, the rating of a foreign obligor should not be better than the rating of its country of incorporation. Such an override decision should be justified and documented.
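
A minimal sketch of such a country ceiling, assuming a numeric master scale where 1 is the best grade (a hypothetical convention):

```python
def apply_country_ceiling(obligor_grade: int, country_grade: int) -> int:
    """Cap a foreign obligor's rating at that of its country of
    incorporation: with 1 = best, the numerically worse grade prevails."""
    return max(obligor_grade, country_grade)
```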
       
      2.10.4
       
      A contractual guarantee of a parent company can potentially result in the rating enhancement of an obligor, but conditions apply:
       
       (i)
       
The treatment of parental support as a rating enhancement should be recognised only upon the production of an independent legal opinion confirming the enforceability of the guarantee upon default. The rating enhancement should only apply to the specific facility benefiting from the guarantee. The process for rating enhancement should be clearly documented. For the avoidance of doubt, a sole letter of intent from the parent company should not be considered a guarantee for enforceability purposes. A formal legal guarantee is the only acceptable documentation.
       (ii)
       
For modelling purposes, an eligible parent guarantee can be reflected in the PD or the LGD of the facility benefiting from this guarantee. If the rating of the facility is enhanced, then the guarantee will logically be reflected in the PD parameter. If the rating of the obligor is not enhanced but the guarantee is deemed eligible, then the guarantee can be reflected in the LGD parameter. The rationale behind such a choice should be fully documented.
       
    • 2.11 Monitoring and Validation

      2.11.1
       
      Institutions should demonstrate that their rating models are performing over time. All rating models should be monitored on a regular basis and independently validated according to all the principles articulated in the MMS. For that purpose, institutions should establish a list of metrics to estimate the performance and stability of models and compare these metrics against pre-defined limits.
       
      2.11.2
       
The choice of metrics to validate rating models should be made carefully. These metrics should be sufficiently granular and capture performance through time. It is highly recommended to estimate the change in the model’s discriminatory power through time, for instance by considering a maximum acceptable drop in the accuracy ratio.
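
A minimal sketch of that recommendation: compute the accuracy ratio per observation period and flag any period where the drop versus the development benchmark breaches a pre-defined limit (the 10-point limit below is purely illustrative).

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def ar_by_period(df: pd.DataFrame, score: str = "model_score",
                 target: str = "default_flag") -> pd.Series:
    """Accuracy ratio (Gini) per period; assumes each period contains
    both defaulted and performing obligors."""
    return df.groupby("period").apply(
        lambda g: 2 * roc_auc_score(g[target], g[score]) - 1)

def flag_ar_deterioration(ar: pd.Series, benchmark_ar: float,
                          max_drop: float = 0.10) -> pd.Series:
    """True where the drop vs. the development benchmark exceeds the limit."""
    return (benchmark_ar - ar) > max_drop
```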
       
      2.11.3
       
In addition to the requirements related to the validation step articulated in the MMS, institutions should ensure that validation exercises for rating models include the following components:
       
       (i)
       
      Development data: A review of the data collection and filtering process performed during the development phase and/or the last recalibration. In particular, this should cover the definition of default and data quality.
       (ii)
       
      Model usage: A review of the governance surrounding model usage. In particular, the validator should comment on (a) the frequency of rating issuance, (b) the governance of rating production, and (c) the variability of ratings produced by the model. The validator should also liaise with the credit department to form a view on (d) the quality of financial inputs and (e) the consistency of the subjective inputs and the presence of potential bias.
       (iii)
       
      Rating override: A review of rating overrides. This point does not apply to newly developed models.
       (iv)
       
      Model design: A description of the model design and its mathematical formulation. A view on the appropriateness of the design, the choice of factors and their transformations.
       (v)
       
      Key assumptions: A review of the appropriateness of the key assumptions, including the default definition, the segmentation and the rating scale employed when developing the model.
 (vi) Validation data: The description of the data set employed for validation.
       (vii)
       
      Quantitative review: An analysis of the key quantitative indicators covering, at a minimum, the model stability, discriminatory power, sensitivity and calibration. This analysis should cover the predictive power of each quantitative and subjective factor driving the rating.
       (viii)
       
Documentation: A review of the quality of the documentation surrounding the development phase and the modelling decisions.
       (ix)
       
      Suggestions: When deemed appropriate, the validator can make suggestions for defect remediation to be considered by the development team.