  • 10 Independent Validation

    • 10.1 Objective and Scope

      10.1.1
       
      The independent validation of models is a key step of their life-cycle management. The objective is to undertake a comprehensive review of models in order to assess whether they are performing as expected and in line with their designed objective. While monitoring and validation are different processes run at different frequencies, the content of the monitoring process forms a subset of the broader scope covered by the validation process. Therefore, when available, the results of the monitoring process must be used as inputs into the validation process.
       
      10.1.2
       
      Institutions must put in place a rigorous process with defined responsibilities, metrics, limits and reporting in order to meet the requirements of independent model validation. Part of the metrics must be common between the monitoring process and the validation process. Independent validation must be applied to all models including statistical models, deterministic models and expert-based models whether they have been developed internally or acquired from a third party provider.
       
      10.1.3
       
      The validation scope must cover both a qualitative validation and a quantitative validation. Both validation approaches complement each other and must not be considered separately. A qualitative validation alone is not sufficient to be considered a complete validation, since it does not constitute an appropriate basis on which modelling decisions can be made. If insufficient data is available to perform the quantitative validation, the validation process should be flagged as incomplete to the Model Oversight Committee, which should then make a decision regarding the usage of the model in light of the uncertainty and the Model Risk associated with such a partially validated model.
       
      10.1.4
       
      The scope of the validation must be comprehensive and clearly stated. The scope must include all relevant model features that are necessary to assess whether the model produces reliable outputs to meet its objectives. If a validation is performed by a third party, institutions must ensure that the validation scope is comprehensive. It may happen that an external validator cannot fully assess all relevant aspects of a model for valid reasons. In this case, institutions are responsible for performing the remainder of the validation and for ensuring that the scope is complete.
       
      10.1.5
       
      A validation exercise must result in an independent judgement with a clear conclusion regarding the suitability of the model. A mere description of the model features and performance does not constitute a validation. Observations must be graded according to an explicit scale including, but not limited to, ‘high severity’, ‘medium severity’ and ‘low severity’. The severity of model findings must reflect the degree of uncertainty surrounding the model outputs, independently of the model materiality, size or scope. As a second step, this degree of uncertainty should be used to estimate Model Risk, since the latter is defined as the combination of model uncertainty and materiality.
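       The combination of uncertainty (driven by finding severity) and materiality into a Model Risk estimate can be sketched as follows. This is a minimal illustration: the three-point scales and the lookup matrix are hypothetical choices, not values prescribed by this Standard.

```python
# Illustrative sketch: combining model uncertainty (driven by finding
# severity) with model materiality to estimate Model Risk.
# The grades and the matrix below are hypothetical, not prescribed.

# Hypothetical risk matrix: keys are (uncertainty, materiality) pairs.
RISK_MATRIX = {
    ("low", "low"): "low",      ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",   ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",  ("high", "medium"): "high",     ("high", "high"): "high",
}

def model_risk(uncertainty: str, materiality: str) -> str:
    """Map an (uncertainty, materiality) pair to a Model Risk grade."""
    return RISK_MATRIX[(uncertainty, materiality)]

# A high-uncertainty but low-materiality model:
print(model_risk("high", "low"))
```

       Note that finding severity drives the uncertainty grade only; materiality enters the assessment as a second step, as stated above.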
       
      10.1.6
       
      In addition to the finding severity, institutions must create internal rating scales to assess the overall performance of each model. This performance rating should be a key input in the decision process at each step of the model life-cycle.
       
    • 10.2 Responsibilities

      10.2.1
       
      Institutions must put in place a rigorous process to ensure that models are independently validated either by an internal dedicated team or by a third party provider, or both. If model validation is assigned to a third party, institutions remain the owners of validation reports and must take appropriate action upon the issuance of these reports.
       
      10.2.2
       
      In order to ensure its independence and efficiency, the party responsible for model validation (“validator”) must be able to demonstrate all of the following characteristics. If the validator does not possess all of these characteristics, its validation reports must not be considered sufficiently independent and/or robust, and therefore must not be used for decision-making.
       
       (i) An advanced understanding of model methodologies and validation techniques that is sufficiently mature to allow the formulation of independent judgement.
       (ii)
       
      The expertise and freedom to express, hold and defend views that differ from those of the development team and of management, and the ability to present those views to the Model Oversight Committee, Senior Management and the Board.
       (iii)
       
      The ability to perform independent research and articulate alternative proposals.
       
      10.2.3
       
      The internal audit function is responsible for verifying that model validation is performed appropriately by the assigned party, following a regular audit cycle. At a minimum, the audit function must cover the following scope:
       
       (i)
       
      Review the governance surrounding the internal validation process and assess its independence in light of the MMS.
       (ii)
       
      Form a view regarding the suitability of the depth and scope of the work performed by the validation team, also in light of the MMS.
       (iii)
       
      Review the relevance, frequency and effectiveness of the validation process. At a minimum, the auditor must review the list of findings issued by the validator and assess if the timing necessary for remediation is appropriate.
       
      10.2.4
       
      The internal audit function should employ third party experts to assist on technical matters until it can demonstrate that it can perform an adequate review of the model validation process without technical support. If the audit team employs supporting experts, it remains the sole owner of the conclusions of the audit report.
       
    • 10.3 Qualitative Validation

      10.3.1
       
      The independent validation process must include a review of the model conceptual soundness, design and suitability of the development process. The scope of the qualitative validation varies depending on the statistical or deterministic nature of the model. This must include, at a minimum, a review of the following elements:
       
       (i)The model governance and decision process,
       (ii)The model conceptual soundness, purpose and scope,
       (iii)The methodology including the mathematical construction,
       (iv)
       
      The suitability of the output in terms of economic intuition and business sense as defined in the MMS, and
       (v)The suitability of the implementation (when the model is implemented)
       In addition, for statistical models:
       (vi)The choice of variables and their respective transformation,
       (vii)The suitability of the data in terms of sources, filters and time period, and
       (viii)
       
      The suitability of the sampling techniques, if any.
       
    • 10.4 Quantitative Validation

      10.4.1
       
      The quantitative validation must assess the suitability of the model output with respect to the objective initially assigned to the model. This process must rely on numerical analyses to derive its conclusions. Such validation should include a set of dedicated analyses to arrive at an independent judgement. Under certain circumstances, partial model replication and/or a challenger model may be necessary to form a judgement.
       
      10.4.2
       
      The set of metrics employed for model validation must at least include those employed for monitoring. As a first step, the validator must review the monitoring reports and their observations. In addition, institutions should employ a broader spectrum of performance metrics to fully assess model performance, since the scope of the validation process is larger than that of monitoring.
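       The requirement that validation metrics form a superset of the monitoring metrics can be expressed as a simple set-inclusion check. A minimal sketch, using hypothetical metric names:

```python
# Sketch: the validation metric set must contain at least the monitoring
# metrics. All metric names below are hypothetical examples.

monitoring_metrics = {"gini", "psi", "default_rate_backtest"}
validation_metrics = {"gini", "psi", "default_rate_backtest",
                      "ks_statistic", "calibration_binomial_test"}

# Any monitoring metric absent from the validation scope is a gap.
missing = monitoring_metrics - validation_metrics
assert not missing, f"validation scope is missing monitoring metrics: {missing}"
print("validation metric set covers the monitoring metrics")
```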
       
      10.4.3
       
      The assessment of model performance must cover, at a minimum, the following components, applicable to both statistical and deterministic models:
       
       (i)
       
      Accuracy and conservatism: The ability of a model to generate predictions that are close to the realised values, observed before and after the model development phase. For models whose results are subject to material uncertainty, the validator should assess whether sufficient conservatism is included in the model calibration.
       (ii)
       
      Stability and robustness: Whilst there are theoretical differences between stability and robustness, for the purpose of this MMS, this refers to the ability of a model to withstand perturbations, i.e. maintain its accuracy despite variability in its inputs or when the modelling assumptions are not fully satisfied. In particular, this means the ability of a model to generate consistent and comparable results through time.
       (iii)
       
      Controlled sensitivity: This relates to the model construction. Model sensitivity refers to the relationship between a change in the model inputs and the observed change in the model results. The sensitivity of the output to a change in inputs must be logical, fully understood and controlled.
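       A minimal numerical illustration of the three components follows. The data, the choice of mean absolute error for accuracy, the population stability index (PSI) for stability and a single-input shock for sensitivity are all illustrative assumptions, not requirements of this Standard.

```python
import math

# (i) Accuracy: mean absolute error between predictions and realised values.
predicted = [0.02, 0.05, 0.10, 0.04]   # hypothetical model predictions
realised  = [0.03, 0.04, 0.12, 0.05]   # hypothetical observed outcomes
mae = sum(abs(p - r) for p, r in zip(predicted, realised)) / len(predicted)

# (ii) Stability: population stability index between the development-sample
# distribution and the current portfolio distribution over score bands.
dev_share = [0.40, 0.35, 0.25]         # score-band shares at development
cur_share = [0.38, 0.34, 0.28]         # score-band shares today
psi = sum((c - d) * math.log(c / d) for c, d in zip(cur_share, dev_share))

# (iii) Sensitivity: response of the output to a controlled input shock.
def model_output(x: float) -> float:
    return 0.5 * x + 0.01              # hypothetical model

shock = 0.10
sensitivity = (model_output(1.0 + shock) - model_output(1.0)) / shock

print(f"MAE={mae:.4f}  PSI={psi:.4f}  sensitivity={sensitivity:.2f}")
```

       Each measure would then be compared against the limits defined in the institution's monitoring and validation framework.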
       
      10.4.4
       
      The quantitative validation process should include a review of the suitability, relevance and accuracy of the following components.
       
       For both statistical and deterministic models:
       (i)The implementation,
       (ii)The adjustments and scaling factors, if any,
       (iii)The ‘hard-coded’ rules and mappings,
       (iv)The extrapolations and interpolations, if any, and
       (v)The sensitivities to changes in inputs,
       In addition, for statistical models only:
       (vi)The model coefficients,
       (vii)The statistical accuracy of the outputs,
       (viii)The raw data as per the DMF requirements, and
       (ix)The historical time series,
       In addition, for deterministic models only:
       (x)A decomposition of the model drivers and their associated sensitivity, and
       (xi)
       
      A partial replication, when possible.
       
    • 10.5 Review Frequency

      10.5.1
       
      All models must be validated at regular frequencies appropriate to model types and tiers. The review periods should not be longer than the ones presented in Table 2 below. More frequent reviews can be implemented at the discretion of institutions, depending on model types and complexity. More frequent reviews may also be necessary in the case of unforeseen circumstances, for instance related to changes in model usage and/or changes in the economic environment. Less frequent reviews are possible in certain circumstances, but they should be justified and will be subject to assessment from the CBUAE.
       
      10.5.2
       
      The dates corresponding to the last monitoring and validation exercises must be tracked rigorously, included in the model inventory and reported to the Model Oversight Committee at least every quarter. The internal audit function must ensure that this process is implemented effectively by the model owner and the validator.
       

       
      Table 2: Minimum monitoring and validation frequencies for most common models
       
                                                  Tier 1 models           Tier 2 models
      Portfolio     Model Type              Monitoring  Validation  Monitoring  Validation
      Wholesale     Rating                  1 year      3 years     2 years     5 years
      Wholesale     PD term structure       1 year      3 years     2 years     5 years
      Wholesale     Macro-PD                1 year      2 years     2 years     3 years
      Wholesale     LGD                     1 year      3 years     2 years     5 years
      Wholesale     Macro-LGD               1 year      2 years     2 years     3 years
      Retail        Scorecard               3 months    1 year      6 months    3 years
      Retail        PD term structure       1 year      2 years     2 years     3 years
      Retail        Macro-PD                1 year      2 years     2 years     3 years
      Retail        LGD                     1 year      2 years     2 years     3 years
      Retail        Macro-LGD               1 year      2 years     2 years     3 years
      EAD           EAD                     1 month     3 years     2 years     5 years
      Trading Book  VaR and related models  3 months    3 years*    6 months    4 years*
      Trading Book  Exposure and xVA        1 year      3 years*    6 months    4 years*
      Multiple      Valuation               1 year      3 years*    n/a         4 years*
      Multiple      Concentration           1 year      3 years**   n/a         n/a
      Multiple      IRRBB                   1 year      3 years**   n/a         n/a
      Multiple      Other Pillar II models  1 year      3 years**   n/a         n/a
      Multiple      Capital forecasting     1 year      3 years**   n/a         n/a
      Multiple      Liquidity               1 year      3 years**   n/a         n/a
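       The cycle lengths in Table 2 can be turned into a simple due-date check for the model inventory. A sketch encoding only a few illustrative rows; the function and field names are assumptions, not part of this Standard:

```python
from datetime import date

# Sketch: deriving the next validation due date from Table 2.
# Only a few table rows are encoded; names are illustrative.
# (portfolio, model type, tier) -> validation cycle in years
VALIDATION_CYCLE_YEARS = {
    ("Wholesale", "Rating", 1): 3,
    ("Wholesale", "Rating", 2): 5,
    ("Retail", "Scorecard", 1): 1,
    ("Retail", "Scorecard", 2): 3,
}

def next_validation_due(portfolio: str, model_type: str, tier: int,
                        last_validation: date) -> date:
    """Add the Table 2 cycle to the last validation date (ignores 29 Feb)."""
    years = VALIDATION_CYCLE_YEARS[(portfolio, model_type, tier)]
    return last_validation.replace(year=last_validation.year + years)

# A Tier 1 retail scorecard validated on 30 June 2023 is due again in 1 year:
print(next_validation_due("Retail", "Scorecard", 1, date(2023, 6, 30)))
```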

       

      10.5.3
       
      Where [*] is indicated in Table 2 above: For pricing and traded risk models such as VaR, exposure and xVA models, a distinction should be made between (i) the model mechanics, (ii) the calibration and (iii) the associated market data. The mechanics should be reviewed at least every 3 to 4 years; however, the suitability of the calibration and the market data should be reviewed more frequently as part of the model monitoring process. In addition to these frequencies, any exceptional market volatility should trigger a revision of all model decisions.
       
      10.5.4
       
      Where [**] is indicated in Table 2 above: For deterministic models such as capital forecasting, concentration and IRRBB models, a distinction should also be made between (i) the model mechanics and (ii) the input data. Whilst the mechanics (methodology and system) can be assessed every 3 years, the calibration must be reviewed yearly in order to assess the appropriate usage of the model with a new set of inputs. This yearly frequency is motivated by the strategic usage of such models in the ICAAP.
       
      10.5.5
       
      For models other than those mentioned in Table 2 above, institutions must establish a schedule for monitoring and validation that is coherent with their nature and their associated Model Risk.
       
    • 10.6 Reporting of Findings

      10.6.1
       
      The analyses and tests performed during the validation of a model must be rigorously documented in a validation report. Validation reports must be practical, action-orientated, focused on findings and free of unnecessary theoretical digressions. A validation report should include, at a minimum, the following components:
       
       (i)The model reference number, nomenclature, materiality and classification,
       (ii)The implementation date, the monitoring dates and the last validation date, if any,
       (iii)A clear list of findings with their associated severity,
       (iv)Suggestions for remediation, when appropriate,
       (v)The value of each performance indicator with its associated limit,
       (vi)The results of the qualitative review as explained above,
       (vii)The results of the quantitative review as explained above,
       (viii)The model risk rating, and
       (ix)
       
      A conclusion regarding the overall performance.
       
      10.6.2
       
      The model validation report must refer to the steps of the model life-cycle. Its conclusion should be one of the following possible outcomes, as mentioned in the model governance section:
       
       (i)Leave the model unchanged,
       (ii)Use a temporary adjustment while establishing a remediation plan,
       (iii)Recalibrate the model,
       (iv)Redevelop a new model, or
       (v)
       
      Withdraw the model without further redevelopment.
       
    • 10.7 Remediation Process

      10.7.1
       
      Institutions must put in place effective processes to manage observations and findings arising from independent validation exercises. The remediation process must be structured and fully documented in the institution’s policy. The findings need to be clearly recorded and communicated to all model stakeholders including, at least, the development team, the members of the Model Oversight Committee and Senior Management. The members of the committee must agree on a plan to translate the findings into actionable items which must be addressed in a timely fashion.
       
      10.7.2
       
      If an institution decides not to address some model defects, it must identify, assess and report the associated Model Risk. It must also consider retiring and/or replacing the model or implementing some other remediation plan. Such a decision may result in additional provisions and/or capital buffers and will be subject to review by the CBUAE.
       
      10.7.3
       
      Upon completion, the validation report must be discussed between the validator and the development team, with the objective of reaching a common understanding of the model weaknesses and their associated remediation. Both parties are expected to reach a conclusion on the validation exercise, its outcomes and its remediation plan. The following must be considered:
       
       (i)
       
      The views expressed by both parties must be technical, substantiated and documented. The development team and/or the model owner should provide a response to all the observations and findings raised by the validator.
       (ii)
       
      The views expressed by both parties must aim towards a practical resolution, striking the right balance between theoretical requirements and practical constraints.
       (iii)
       
      The resolution of modelling defects must be based on minimising the estimated Model Risk implicit in each remediation option.
       (iv)
       
      Outstanding divergent views between both parties should be resolved by the Model Oversight Committee.
       
      10.7.4
       
      For each finding raised by the validator, the following must be submitted to the Model Oversight Committee for consideration: (i) substantiated evidence from the validator, (ii) the opinion of the development team, (iii) a suggested remediation, if deemed necessary, and (iv) a remediation date, if applicable. The Model Oversight Committee must decide to proceed with one of the options listed in Article 10.6.2 above. When making a choice amongst the various options, the Committee must consider their respective Model Risk and associated financial implications.
       
      10.7.5
       
      The validator must keep track of the findings and remediating actions and report them to the Model Oversight Committee and Senior Management on a quarterly basis, and to the Board (or to a specialised body of the Board) on a yearly basis. Such status reports must cover all models and present the outstanding Model Risk. The reports must be reviewed by the internal audit function as part of their audit review. Particular attention should be given to repeated findings from one validation to the next.
       
      10.7.6
       
      If the institution does not have an internal validation team, then reporting of model findings and remediation can be performed by another function within the institution. However, the internal audit function must regularly review the reporting process to ensure that such reporting is an accurate representation of the status of model performance.
       
      10.7.7
       
      Institutions must aim to resolve model findings promptly in order to mitigate Model Risk. For that purpose, institutions must develop a process to manage defect remediation effectively. This process must include the following principles:
       
       (i)
       
      High severity findings must be addressed immediately with tactical solutions, irrespective of the model Tier. Such solutions can take the form of temporary adjustment, overlay and/or scaling in order to reduce the risk of inaccurate model outputs and introduce a degree of conservatism. Tactical solutions must not become permanent, must be associated with an expiration date and must cease after the implementation of permanent remediation.
       (ii)
       
      Institutions must establish maximum remediation periods per finding severity, per model Tier and per model type. The remediation period must start from the date at which the Model Oversight Committee reaches an agreement on the nature and severity of the finding. For findings requiring urgent attention, an accelerated approval process must be put in place to start remediation work.
       (iii)
       
      Tactical solutions must only be temporary in nature, and institutions should aim to fully resolve high severity findings within six (6) months. At a maximum, high severity findings must be resolved no later than twelve (12) months after their identification. High severity findings not resolved within six (6) months must be reported to the Board and to the CBUAE.
       (iv)
       
      When establishing maximum remediation periods, institutions must take into account model types in order to mitigate Model Risk appropriately. For instance, defects related to market risk / pricing models should be remedied within weeks, while defect remediation for rating models could take longer.
       (v)
       
      For each defect, a clear plan must be produced in order to reach timely remediation. Priority should be given to models with greater financial impacts. The validator should express its view on the timing and content of the plan, and the remediation plan should be approved by the Model Oversight Committee.
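       The six- and twelve-month horizons for high severity findings can be tracked with a simple age check. A sketch only: the 182- and 365-day approximations of six and twelve months, and the status labels, are illustrative assumptions.

```python
from datetime import date

# Sketch: classifying a high severity finding by its age since the
# Model Oversight Committee agreed on it. Day counts and labels are
# illustrative choices, not prescribed values.

def escalation_status(agreed_on: date, today: date) -> str:
    age_days = (today - agreed_on).days
    if age_days > 365:   # past the 12-month hard limit
        return "breach: must already be resolved"
    if age_days > 182:   # past 6 months: report to the Board and the CBUAE
        return "escalate to Board and CBUAE"
    return "within target resolution window"

print(escalation_status(date(2024, 1, 10), date(2024, 9, 1)))
```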
       
      10.7.8
       
      At the level of the institution, the timing for finding resolution is a reflection of the effectiveness of the validation process and the ability of the institution to manage Model Risk. This will be subject to particular attention from the CBUAE. Exceptions to the time frame defined by institutions must be formally approved by Senior Management upon robust justification and will be reviewed by the CBUAE as part of regular supervision.