Part II – Application of the Standards
The MMS is constructed in such a way that the numbering of each article is sequential and each article is a unique reference across the entire MMS. Therefore the numbering continues from the previous Part.
4 Model Governance
4.1 Overview
4.1.1
Institutions must develop and maintain policies and procedures that support their model management framework. In addition, they must regularly ensure that these policies and procedures are correctly implemented.
4.1.2
In addition to the elements mentioned in Part I, institutions must include the following components in their model governance framework, at a minimum: (i) the definition of model objectives, (ii) steps of model life-cycle, (iii) model inventory, (iv) model ownership, (v) identification of key stakeholders involved in decision-making, (vi) relations with third parties, (vii) adequacy of internal skills, (viii) comprehensive model documentation, and (ix) reporting.
4.2 Model Objectives and Strategy
4.2.1
Institutions must assign a clearly defined objective to each model and include it in the model development documentation.
4.2.2
If stakeholders disagree on the objective of a model, the model must remain under development or be removed from production until the disagreement is resolved.
4.2.3
Institutions must have a defined strategy to meet the objectives of their models. Institutions must distinguish short-term tactical solutions from longer-term solutions. Such strategies must be documented and approved by the stakeholders involved in model management, including Senior Management and the Board.
4.2.4
The modelling strategy must clearly articulate the potential contribution of third party consultants to the development, management and validation of models. The outsourcing strategy must be defined and justified, in particular regarding data, systems, calibration and methodology design. If a portion of modelling work is outsourced, institutions must implement mechanisms to retain control over the key elements of modelling.
4.3 Model Life-Cycle
4.3.1
Institutions must manage each model according to a cycle that includes, at a minimum, the following steps.
(i) Development, (ii) Pre-implementation validation, (iii) Implementation, (iv) Usage and monitoring, (v) Independent validation, and (vi) Recalibration, redevelopment or retirement, if deemed necessary.
4.3.2
The duration and frequency of each step must be specified in advance for each model and documented accordingly.
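For illustration only, the life-cycle steps in article 4.3.1 can be sketched as a small state machine; the transition routes shown (for example, validation looping back to development) are assumptions made for the sketch, not requirements of the MMS.

```python
from enum import Enum, auto

class LifecycleStep(Enum):
    """Minimum model life-cycle steps listed in article 4.3.1."""
    DEVELOPMENT = auto()
    PRE_IMPLEMENTATION_VALIDATION = auto()
    IMPLEMENTATION = auto()
    USAGE_AND_MONITORING = auto()
    INDEPENDENT_VALIDATION = auto()
    RECALIBRATION_REDEVELOPMENT_OR_RETIREMENT = auto()

# Permitted transitions between steps (illustrative assumption: validation
# may send a model back to development, and a recalibration restarts the cycle).
TRANSITIONS = {
    LifecycleStep.DEVELOPMENT: {LifecycleStep.PRE_IMPLEMENTATION_VALIDATION},
    LifecycleStep.PRE_IMPLEMENTATION_VALIDATION: {
        LifecycleStep.IMPLEMENTATION, LifecycleStep.DEVELOPMENT},
    LifecycleStep.IMPLEMENTATION: {LifecycleStep.USAGE_AND_MONITORING},
    LifecycleStep.USAGE_AND_MONITORING: {LifecycleStep.INDEPENDENT_VALIDATION},
    LifecycleStep.INDEPENDENT_VALIDATION: {
        LifecycleStep.RECALIBRATION_REDEVELOPMENT_OR_RETIREMENT,
        LifecycleStep.USAGE_AND_MONITORING},
    LifecycleStep.RECALIBRATION_REDEVELOPMENT_OR_RETIREMENT: {
        LifecycleStep.DEVELOPMENT},
}

def can_move(current: LifecycleStep, target: LifecycleStep) -> bool:
    """Check whether a life-cycle transition is permitted."""
    return target in TRANSITIONS[current]
```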
4.3.3
Upon independent validation and the response from the development team, the Model Oversight Committee must consider the following decisions, each of which must be thoroughly justified:
(i) Leave the model unchanged, (ii) Use a temporary adjustment while establishing a remediation plan, (iii) Recalibrate the model, (iv) Redevelop or acquire a new model, or (v) Withdraw the model without further redevelopment.
4.4 Model Inventory and Grouping
4.4.1
Institutions must maintain a comprehensive inventory of all their models employed in production to support decision-making. The inventory must cover internal models and models provided by third parties. It must contain sufficient relevant information to support model management and mitigate Model Risk.
4.4.2
The inventory must cover models both currently in use and employed in the past for production (starting from the implementation of this MMS). Institutions must ensure that they can refer and/or roll back to previously employed models, if necessary. Consequently, institutions must have a model archiving mechanism in place supported by appropriate documentation and IT system infrastructure.
4.4.3
Each model must have a unique nomenclature and identification number that must be explicitly mentioned in any related model documentation. A model with a new calibration must carry a different identification number. Any variation of a model requiring a separate validation or approval should be identified as a separate model.
4.4.4
The model inventory must include, for each model, all the references and documents pertaining to each step of the life-cycle, including, amongst others: (i) the dates of each step, including past and planned steps, (ii) the internal party responsible for each step, and (iii) previous validation exercises and audit reviews, with references to their respective outcomes. Third-party consultants must not be considered as responsible for any step but only as supporting its execution. Where consultants have been involved, their details must be recorded.
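As an illustrative sketch, an inventory record covering articles 4.4.3 and 4.4.4 might look as follows; the field names and types are assumptions, since the MMS prescribes the content of the inventory, not its schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """One record of the model inventory (sketch; schema is an assumption)."""
    model_id: str                 # unique identification number (article 4.4.3)
    name: str
    tier: int                     # 1 or 2 (article 4.4.5)
    in_production: bool
    step_dates: dict = field(default_factory=dict)       # past and planned life-cycle dates
    step_owners: dict = field(default_factory=dict)      # internal party responsible per step
    validation_refs: list = field(default_factory=list)  # past validations and audit reviews
    consultants: list = field(default_factory=list)      # supporting third parties, if any

inventory: dict = {}

def register(entry: InventoryEntry) -> None:
    """Enforce uniqueness of model identification numbers across the inventory."""
    if entry.model_id in inventory:
        raise ValueError(f"duplicate model id: {entry.model_id}")
    inventory[entry.model_id] = entry
```

A recalibrated model would be registered under a new `model_id`, consistent with the requirement that a new calibration carries a different identification number.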
4.4.5
Models must be grouped based on their associated Model Risk.
(i) At a minimum, institutions must create two groups referred to as Tier 1 and Tier 2 models, with Tier 1 models being more critical than Tier 2 models. If institutions already employ more than two groups, those can be retained for internal purposes. In the context of the MMS and for regulatory purposes, models deemed less material than Tier 2 must be regarded as Tier 2.
(ii) Whilst the grouping decisions are left to the discretion of each institution, they will be reviewed by the CBUAE as part of its supervisory duty. At a minimum, IFRS9 models for large portfolios (measured by exposure) and capital forecasting models must be classified as Tier 1.
(iii) Institutions may prioritise model management by tier once they have established a clear grouping framework based on Model Risk. In the MMS, in the absence of specific reference to model tiers, the requirements apply to all models irrespective of their materiality, as these requirements must be regarded as fundamental building blocks of model management. Where needed, the MMS explicitly refers to model Tier 1 and Tier 2.
4.5 Model Ownership
4.5.1
The concept of model ownership is fundamental to model management. Institutions must ensure that an internal owner with a sufficient level of seniority is assigned to each model at all times.
4.5.2
The owner of a model is accountable for all modelling decisions and for ensuring that the model goes through all the steps of its life-cycle in a timely fashion. In other words, a model owner is not responsible for executing all the steps; however, a model owner must ensure that the steps are performed.
4.5.3
Risk models involving statistical calibration must be owned by the risk department and must not be owned by the business lines to avoid conflicts of interest. Pricing and valuation models used for commercial decisions can be owned by the business lines. Other financial models with no statistical calibration can be owned by the finance department, at the discretion of each institution.
4.6 Stakeholders and Decision Process
4.6.1
A modelling decision is defined as a deliberate choice that relates to each step of the model life-cycle. In particular, key modelling decisions relate to (i) the model strategy, (ii) the choice of data, (iii) the analysis of data, (iv) the methodology and the development process, (v) the calibration, and (vi) the implementation of models. Such decisions have material impacts on model outcomes and have financial implications. Consequently, institutions must implement a clear governance process around these decisions.
4.6.2
All parties involved in making decisions required at any step of the model life-cycle must be identified and recorded in the model documentation. Within an institution, individuals may hold several of these roles (i.e. several responsibilities), with the exception of model validation which must remain independent from the other roles. At a minimum, the following roles must be identified for each model:
(i) Model owner, (ii) Model developer, (iii) Model validator, (iv) Model user, (v) Modelling data owner, and (vi) Model Oversight Committee members.
4.6.3
Institutions must establish a Model Oversight Committee, to which the stakeholders mentioned in article 4.6.2 are accountable. This committee must be established separately from existing risk management committees. Its scope must cover all models across the institution, with the view to manage Model Risk in its entirety. The committee must convene regularly and at a minimum every quarter.
4.6.4
The Model Oversight Committee must provide substantiated decisions related to each step of the model life-cycle and in particular, strategic modelling options. Consequently, the committee members must have a minimum level of technical understanding to be able to contribute to those decisions.
4.6.5
The Model Oversight Committee must be accountable to Senior Management and the Board. The committee must provide an impartial view of the best modelling approach for the institution. It must remain independent from actual, potential or perceived interests of business lines. Therefore, the majority of the Committee members must not be from the business lines. If business views and risk management views related to modelling choices are irreconcilable, Senior Management must make a decision, be accountable for it and provide a clear rationale for it. The final decision must be in compliance with the requirements outlined in the MMS.
4.6.6
At a minimum, the Model Oversight Committee must hold the following responsibilities.
(i) Design the institution’s appetite for Model Risk to be approved by the Board, (ii) Ensure that Model Risk is managed appropriately across the institution, (iii) Escalate modelling decisions when necessary, (iv) Oversee the objective and strategy of each model, (v) Approve the development of new models, (vi) Request the development of new models when necessary, (vii) Approve material modelling decisions throughout the model life-cycle, and (viii) At the end of each cycle, review the validation results and make a choice amongst the options presented in section 4.3 on the model life-cycle.
Whilst some technical aspects of these responsibilities can be delegated to subcommittees, working groups and/or individuals, the Model Oversight Committee must remain the centralised forum where modelling decisions for the whole institution are discussed, made or proposed for escalation. Material modelling decisions must be ultimately approved by the Board.
4.6.7
Other subject matter experts across the institution and third party experts can contribute to each step of the model life-cycle depending on their field of expertise. They can be involved in model design, development and testing. However, their involvement must be viewed as consultative only.
4.6.8
The CRO is responsible for ensuring that Model Risk is managed appropriately. Consequently, as part of his/her duty, the CRO must ensure that:
(i) Model Risk is appropriately identified, understood, estimated, reported and mitigated across the institution.
(ii) The governance for model management is efficient and appropriate to the size and complexity of the institution.
(iii) The Model Oversight Committee is functioning appropriately and meets the responsibilities outlined in article 4.6.6.
(iv) Material modelling decisions are approved by the Board (or the Board Risk Committee), and the Board is adequately informed of Model Risk, the status of model management and the performance of models.
(v) A suitable escalation process is in place throughout the institution and up to the Board.
(vi) The institution employs adequate resources to meet the demands of model management and, where required, escalates identified gaps to Senior Management and/or the Board.
(vii) He/she is fully familiar with the requirements articulated in the MMS.
(viii) He/she has sufficient technical understanding to form an opinion about modelling decisions with material financial implications.
(ix) He/she is sufficiently informed of material modelling decisions, in such a way that he/she can articulate a view about the suitability of these decisions.
(x) Particular attention is given to the quality, completeness and accuracy of the data used to make decisions based on models.
4.7 Third Party Provider
4.7.1
Institutions must remain the owners of their models at all times, under all circumstances. They must remain accountable for all modelling choices, even in the case of support from a third party consultant for any of the steps in the life-cycle.
4.7.2
If modelling support is provided by a third party, institutions must take the necessary steps to transfer knowledge from that third party to internal employees within a given time frame. This requirement applies to any of the steps of the model life-cycle.
4.7.3
Third party providers may offer a range of modelling contributions covering, amongst others, methodological support, system infrastructure, validation services and ready-made calibrations based on external data. Institutions must take the necessary action to fully understand the contributions provided by third parties. This requirement applies to all models and to all risk types.
4.7.4
In the case of methodological support, whilst institutions must operate within the constraints of the acquired model, they must demonstrate that the method is adequate to their portfolios. If a methodology acquired from a third party is not fully understood by the institution, then it must not be considered fit for purpose. If a third party provides a methodology to an institution, any subsequent validation exercise must be performed by an internal or external party independent from the original provider.
4.7.5
If a third party provides a ready-made calibrated model based on external data, such a solution must be justified, based on the following specific circumstances:
(i) For portfolios and metrics for which an institution is not able to collect sufficient internal data, externally calibrated models are acceptable. For instance, this applies in the case of low default portfolios or small portfolios for which data collection may not lead to statistically representative samples.
(ii) For portfolios and metrics for which an institution is in a position to collect internal data, externally calibrated models must not be used, other than temporarily over the short term until sufficient data is collected. In this case, immediately after the model implementation, institutions must take the necessary actions to (i) collect historical internal data from internal systems and (ii) collect future internal data in order to develop a model internally.
4.8 Internal Skills
4.8.1
Institutions must ensure that they acquire and retain adequate internal knowledge and core competences about modelling techniques. Full model ownership requires that institutions must have an appropriate number of internal employees with technical skills to understand and own models, even with the contribution of third parties. The contribution of external consultants cannot justify a lack of internal technical employees.
4.8.2
All institutions must ensure that they have a minimum number of technical employees to manage models independently of third parties. The skills of these employees must be sufficient to cope with the complexity of the models implemented at the institution. If an institution does not have the required internal skills to manage complex models, these models should be simplified or replaced.
4.8.3
For branches or subsidiaries of foreign institutions, the internal technical expertise may reside at the parent group level, which is responsible for the oversight of the local implementation and/or usage of models. The technical experts from the parent entity must also oversee any third parties employed to deliver models for the local entity. The local branches or subsidiaries must nonetheless have employees with sufficient skills to ensure that models are suitably calibrated to the UAE context and meet the CBUAE requirements in this regard.
4.8.4
Knowledge about a model must not be restricted to a single individual in the organisation. Instead, knowledge must be shared amongst several staff members. This is necessary for the purpose of sound decision-making related to modelling choices and to minimise the impact of staff departure on the smooth continuation of model life-cycle execution.
4.8.5
Institutions are expected to recognise the scarcity of technical staff able to genuinely understand and own models. Therefore, they must put in place development plans and initiatives to retain and manage their technical employees appropriately. The strategic management of technical resources must include full and adequate cooperation of the institutions’ human resources function.
4.9 Model Documentation
4.9.1
Dedicated and consistent documentation must be produced for each step of the model life-cycle. The documentation must be sufficiently comprehensive to ensure that an independent party has all the necessary information to assess the suitability of the modelling decisions. In particular, the documentation must make a clear distinction between theoretical considerations, calibration choices and practical implementation considerations.
4.9.2
All model documentation, model management policies and procedures must be an accurate reflection of the institution’s practice and usage. In other words, institutions must ensure that the model attributes described in a modelling document are actually implemented. Any gaps and partial implementation must be recorded, tracked and reported to Senior Management and the Board by the modelling stakeholders. Institutions must have a remediation plan in place to address each of these gaps within an appropriate timeframe.
4.9.3
Institutions must develop internal standards for model documentation across all model types, with rigorous document control. This requirement is particularly relevant for the development and the validation steps. The documentation must be adapted to the type of model, the business context and the step of the life-cycle. At a minimum, all model development documentation must include the following information:
(i) Document control including the model reference, owners, contributors and key dates of each life-cycle step, (ii) Model materiality in relation to the institution’s risk profile, (iii) Overview of the model strategy, structure and expected usage, (iv) Data set description, when applicable, (v) Methodology and modelling choices related to all the key modelling decisions, (vi) Modelling assumptions, weaknesses and limitations, (vii) Expert judgement inputs, if any, (viii) Impact analysis of the new modelling decisions, and (ix) Implementation process and timing of the new modelling decisions.
4.10 Performance Reporting
4.10.1
Institutions must implement a comprehensive reporting framework to ensure that Model Risk is analysed and assessed for the purpose of implementing risk mitigating measures.
4.10.2
Reporting must be implemented at several levels of the organisation, including to the Model Oversight Committee, the institution’s Risk Committee and the Board. Reporting must be specific and adapted to the nature of the stakeholders. The status of model management and Model Risk across the entire organisation must be presented to the Model Oversight Committee and the institution’s Risk Committee at a minimum on a quarterly basis, and to the Board or a specialised sub-committee of the Board at least on a yearly basis.
4.10.3
Reporting must be designed to support Model Risk management covering the identification, measurement, monitoring and mitigation of these risks. In particular, reporting must cover (i) the status of the model lifecycle for each model, (ii) the results of model performance assessment, (iii) the risks arising from the uncertainty surrounding certain modelling decisions, and (iv) the status and estimation of Model Risk throughout the organisation.
4.10.4
Institutions must comply with model reporting requirements from the CBUAE, as they evolve through time.
4.11 Mergers, Acquisitions and Disposals
4.11.1
If an institution merges with or acquires another institution, it must re-visit all the elements of the model management framework, as part of the integration process. The modelling framework and all the principles of model life-cycle management must be applied consistently across the newly formed institution. In particular, model ownership must be clearly defined. The newly formed institution must have sufficient resources to fully manage the new scope of models.
4.11.2
The scope of models must be re-visited to assess whether there is a degree of overlap between models. Depending on circumstances, models may need to be recalibrated or redeveloped. Models must be representative of the risk profile of the newly formed institution. In the case of overlap between two similar models, a new single model must be developed based on a larger data sample. This new development must occur promptly after the completion of the merger or the acquisition.
4.11.3
Institutions must pay particular attention to the integration of historical data, and future data collection, subsequent to the merger or the acquisition. This requirement applies to all data fields used as inputs to the existing models and to the future models to be developed, in particular, default rates and recovery information. Historical data time series must be reconstructed to reflect the characteristics and risk profile of the newly formed institution. Upon the implementation of the MMS, this requirement applies retroactively to cover, at a minimum, a full economic cycle in the UAE, and where possible covering the 2008 global financial crisis. Future data collection must be performed for the entire scope of the newly formed institution.
4.11.4
In the case of the disposal of an entity, a subsidiary, a branch and/or a large portfolio, institutions must ensure that the modelling framework and all the principles of model life-cycle management are adjusted to fit the needs of the reduced scope of portfolios, products, obligors and/or exposures.
5 Data Management
5.1 Data Governance
5.1.1
For the avoidance of doubt, the scope under consideration in this section includes the data employed for modelling and validation purposes, not the data employed for regular risk analysis and reporting. This section focuses on the construction of historical data sets for the purpose of modelling.
5.1.2
Accurate and representative historical data is the backbone of financial models. Institutions must implement a rigorous and comprehensive formal data management framework (“DMF”) to ensure the development of accurate models. Institutions must consider the DMF as a structured process within the institution, with dedicated policies and procedures, and with an adequate amount of resources and funding. The DMF core principles are as follows:
(i) It must be approved by Senior Management and the Board, (ii) It must be thoroughly documented with indication of limitations and assumptions, (iii) Its coverage must include the whole institution and all material risk types, and (iv) It must be independently validated.
5.1.3
The DMF must include, at a minimum, the following steps:
(i) Identification of sources, (ii) Regular and frequent collection, (iii) Rigorous data quality review and control, (iv) Secure storage and controlled access, and (v) Robust system infrastructure.
5.1.4
The roles and responsibilities of the parties involved or contributing to the DMF must be defined and documented. Each data set or data type must have an identified owner. The owner is accountable for the timely and effective execution of the DMF steps for its data set or data type. The owner may not be responsible for performing each of the DMF steps, but she/he must remain accountable for ensuring that those are performed by other parties with high quality standards.
5.2 Identification of Data Sources
5.2.1
The DMF must include a process to identify and select relevant data sources within the institution for each type of data and model. If an institution recently merged or acquired another entity, it must carry out the necessary steps to retrieve historical data from these entities.
5.2.2
If internal sources are lacking in data quality or quantity, institutions may rely on external sources. However, if an institution decides to rely on external data for modelling, it must demonstrate that the data is relevant and suitably representative of its risk profile and its business model. External data sources must be subject to an identification and selection process. The DMF governance and quality control also apply to external data employed for modelling.
5.2.3
Once a source has been selected, institutions are expected to retain this source long enough to build consistent time series. Any change of data source for the construction of a given data set must be rigorously documented.
5.3 Data Collection
5.3.1
Each institution must collect data for the estimation of all risks arising from instruments and portfolios where it has material exposures. The data collection must be sufficiently granular to support adequate modelling. This means that data collection must be (i) sufficiently specific to be attributed to risk types and instrument types, and (ii) sufficiently frequent to allow the construction of historical time series.
5.3.2
The data collection process must cover, amongst others, credit risk, market risk (in both the trading and banking books), concentration risk, liquidity risk, operational risk, fraud risk and financial data for capital modelling. A justifiable and appropriate collection frequency must be defined for each risk type.
5.3.3
The data must be organised such that the drivers and dimensions of these risks can be fully analysed. Typical dimensions include obligor size, industries, geographies, ratings, product types, tenor and currency of exposure. For credit risk in particular, the data set must include default events and recovery events by obligor segments on a monthly basis.
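For example, monthly default events by obligor segment can be aggregated from observation-level records along the following lines; the record layout, field names and segment labels are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative loan-level observations; field names are assumptions.
observations = [
    {"month": "2023-01", "segment": "SME",       "defaulted": False},
    {"month": "2023-01", "segment": "SME",       "defaulted": True},
    {"month": "2023-01", "segment": "Corporate", "defaulted": False},
    {"month": "2023-02", "segment": "SME",       "defaulted": False},
]

def monthly_default_rates(obs):
    """Aggregate default events by (month, segment) to build the monthly
    default time series required for credit risk modelling."""
    totals = defaultdict(int)
    defaults = defaultdict(int)
    for o in obs:
        key = (o["month"], o["segment"])
        totals[key] += 1
        defaults[key] += o["defaulted"]   # True counts as 1
    return {k: defaults[k] / totals[k] for k in totals}

rates = monthly_default_rates(observations)
```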
5.3.4
The data collection must be documented. The data collection procedure must include clear roles and responsibilities with a maker-checker review process, when appropriate.
5.3.5
Institutions must seek to maximise automated collection and minimise manual intervention. Where manual interventions cannot be avoided, they must be rigorously documented to prevent operational errors.
5.3.6
The data collection process must ensure the accuracy of metadata such as units, currencies, and date/time-stamping.
5.4 Data Quality Review
5.4.1
Prior to being used for modelling purposes, the extracted data must go through a cleaning process to ensure that data meets a required quality standard. This process must consider, at a minimum, the following data characteristics:
(i) Completeness: values are available, where needed, (ii) Accuracy: values are correct and error-free, (iii) Consistency: several sources across the institution lead to matching data, (iv) Timeliness: values are accurate as of the reporting date, (v) Uniqueness: values are not incorrectly duplicated in the same data set, and (vi) Traceability: the origin of the data can be traced.
5.4.2
Institutions must put in place a process to accomplish a comprehensive data quality review. In particular, the quality of data can be improved by, amongst others, replacing missing data points, removing errors, correcting the unit basis (thousands vs. millions, wrong currency, etc.) and reconciling against several sources.
5.4.3
Institutions must put in place tolerance levels and indicators of data quality. These indicators must be mentioned in all model documentation. Data quality reports must be prepared regularly and presented to Senior Management and the Board as part of the DMF governance, with the objective to monitor and continuously improve the quality of data over time. Considering the essential role of data quality in supporting risk management and business decisions, institutions must also consider including data quality measures in their risk appetite framework.
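As a minimal sketch, quality indicators and tolerance checks of the kind described above could be computed as follows; the indicator definitions, field names and threshold values are illustrative assumptions, not prescribed levels.

```python
def quality_indicators(records, required_fields, key_field):
    """Compute simple indicators for two of the dimensions in article 5.4.1:
    completeness and uniqueness (sketch; definitions are assumptions)."""
    n = len(records)
    complete = sum(all(r.get(f) is not None for f in required_fields)
                   for r in records)
    unique_keys = len({r.get(key_field) for r in records})
    return {
        "completeness": complete / n,   # values available where needed
        "uniqueness": unique_keys / n,  # no incorrect duplicates
    }

def within_tolerance(indicators, tolerances):
    """Flag which indicators meet their tolerance levels."""
    return {name: indicators[name] >= tolerances[name] for name in tolerances}

records = [
    {"id": "A1", "exposure": 100.0, "rating": "BB"},
    {"id": "A2", "exposure": None,  "rating": "B"},   # incomplete record
    {"id": "A2", "exposure": 50.0,  "rating": "BB"},  # duplicate key
]
ind = quality_indicators(records, ["exposure", "rating"], "id")
ok = within_tolerance(ind, {"completeness": 0.95, "uniqueness": 0.99})
```

In this example both indicators breach their (assumed) tolerances, which would trigger the remediation and reporting described above.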
5.5 Data Storage and Access
5.5.1
Once a data set has been reviewed and is deemed fit for usage, it must be stored in a defined and shared location. Final data sets must not be solely stored on the computers of individual employees.
5.5.2
The access to a final data set must be controlled and restricted to avoid unwarranted modifications.
5.5.3
Appropriate measures must be taken to ensure that data is stored securely to mitigate operational risks such as cyber-attacks and physical damage.
5.6 System Infrastructure
5.6.1
Institutions must ensure that an appropriate IT system infrastructure is in place to support all the steps required by the DMF.
5.6.2
The system infrastructure must be sufficiently scalable to support the DMF requirements.
5.6.3
The system infrastructure must be in the form of strategic long-term solutions, not tactical solutions. Spreadsheet solutions must not be considered acceptable long-term solutions for data storage.
5.6.4
Employment of staff with data science knowledge and expertise is encouraged in order to undertake appropriate data management oversight.
5.6.5
Institutions must minimise key person risk related to the management of modelling data. They must ensure that several members of staff have the suitable technical expertise to fully manage data for modelling purposes.
6 Model Development
6.1.1
The development of internal models must follow a documented and structured process with sequential and logical steps, supporting the construction of the most appropriate models to meet the objectives assigned to these models. At a minimum, institutions must consider the following components. More components can be added depending on the type of model. If a component is not addressed, then clear justification must be provided.
(i) Data preparation, (ii) Data exploration (for statistical models), (iii) Data transformation, (iv) Sampling (for statistical models), (v) Choice of methodology, (vi) Model construction, (vii) Model selection, (viii) Model calibration (for statistical models), (ix) Pre-implementation validation, and (x) Impact analysis.
6.1.2
This process must be iterative, in that, if one step is not satisfactory, some prior steps must be repeated. For instance, if no model can be successfully constructed, additional data may be needed or another methodology should be explored.
6.1.3
Each of these steps must be fully documented to enable an independent assessment of the modelling choices and their execution. This requirement is essential to support an adequate, independent model validation. Mathematical expressions must be documented rigorously to enable replication if needed.
6.1.4
For the purpose of risk models, a sufficient degree of conservatism must be incorporated in each of the development steps to compensate for uncertainties. This is particularly relevant in the choice of data and the choice of methodology.
6.2 Data Preparation and Representativeness
6.2.1
Institutions must demonstrate that the data chosen for modelling is representative of the key attributes of the variables to be modelled. In particular, the time period, product types, obligor segments and geographies must be carefully chosen. The development should not proceed further if the data is deemed not representative of the variable being modelled. The institution should use a conservative buffer instead of a model, until a robust model can be built.
6.2.2
For the purpose of preparation and accurate representation, the data may need to be filtered. For instance, specific obligors, portfolios, products or time periods could be excluded in order to focus on the relevant data. Such filtering must be supported by robust documentation and governance, such that the institution is in a position to measure the impact of data filtering on model outputs. The tools and codes employed to apply filters must be fully transparent and replicable by an independent party.
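A transparent, replicable filtering step of the kind required here can be sketched as follows, with each named exclusion's impact on the sample recorded; the filter names, predicates and record fields are illustrative assumptions.

```python
def apply_filters(records, filters):
    """Apply named, documented exclusion filters in sequence and record how
    many rows each filter removes, so the impact of filtering on the data
    set is measurable and replicable by an independent party."""
    impact = {}
    remaining = list(records)
    for name, keep in filters:
        before = len(remaining)
        remaining = [r for r in remaining if keep(r)]
        impact[name] = before - len(remaining)   # rows excluded by this filter
    return remaining, impact

records = [
    {"obligor": "X", "exposure": 10.0, "year": 2019},
    {"obligor": "Y", "exposure": 0.0,  "year": 2021},
    {"obligor": "Z", "exposure": 5.0,  "year": 2015},
]
filters = [
    ("exclude_zero_exposure", lambda r: r["exposure"] > 0),
    ("exclude_pre_2018",      lambda r: r["year"] >= 2018),
]
sample, impact = apply_filters(records, filters)
```

The `impact` dictionary gives the documentation trail required above: which filter was applied, and how much data it excluded.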
6.3 Data Exploration
6.3.1
The data exploration phase must be used to confirm whether the data set is suitable for modelling purposes. The objective is to understand the nature and composition of the data set at hand and to identify expected or unusual patterns in the data. In this process, critical thinking and judgement are expected from the modelling team.
6.3.2
Descriptive statistics should be produced across both the dependent and independent variables. For instance, for credit risk modelling, such exploration is relevant to identify whether obligors have homogeneous features per segment; for market risk modelling, it is relevant to assess whether the market liquidity of the underlying product is sufficient to ensure a minimum reliability of the market factor time series.
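As a minimal illustration of such exploration, per-segment descriptive statistics can be produced with a routine of the following shape. The field names are hypothetical, and real exploration would cover far more, e.g. distributions, missing values and outliers.

```python
from collections import defaultdict
from statistics import mean, pstdev

def segment_summary(observations, segment_key, value_key):
    """Produce simple descriptive statistics of a variable per segment."""
    groups = defaultdict(list)
    for obs in observations:
        groups[obs[segment_key]].append(obs[value_key])
    # Count, mean and population standard deviation per segment.
    return {
        seg: {"count": len(vals), "mean": mean(vals), "std": pstdev(vals)}
        for seg, vals in groups.items()
    }
```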
6.3.3
Institutions must clearly state the outcome of the data exploration step, that is, whether the data is fit for modelling or not. In the latter case, the development process must stop and additional suitable data must be sourced. Consequently, data unavailability must not excuse unreliable and inaccurate model output.
6.3.4
The exploration of data can lead to unusual, counterintuitive or even illogical patterns. Such features should not be immediately accepted as a mere consequence of the data. Instead, the modelling team is expected to analyse these patterns further, at a lower level of granularity, to understand their origin. Subsequently, either (i) the pattern should be accepted as a matter of fact, or (ii) the data should be adjusted, or (iii) the data set should be replaced. This investigation must be fully documented because it has material consequences on model calibration.
6.4 Data Transformation
6.4.1
Institutions must search for the most appropriate transformation of the dependent and the independent variables, in order to maximise the explanatory power of models. If some variables do not need to be transformed, such conclusion must be clearly stated and justified in the model development documentation.
6.4.2
The choice of variable transformation must neither be random nor coincidental. Transformations must be justified by an economic rationale. Amongst others, common transformations include (i) relative or absolute differencing between variables, (ii) logarithmic scaling, (iii) relative or absolute time change, (iv) ranking and binning, (v) lagging, and (vi) logistic or probit transformation. Quadratic and cubic transformations are possible but should be used with caution, backed by a robust economic rationale and a clear purpose.
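Several of the transformations listed above can be expressed in a few lines; the sketch below is illustrative only, and any transformation actually used must be justified by an economic rationale as required above.

```python
import math

def log_scale(x):
    """(ii) Logarithmic scaling of a positive variable."""
    return math.log(x)

def abs_diff(series):
    """(i)/(iii) Absolute differencing between consecutive observations."""
    return [b - a for a, b in zip(series, series[1:])]

def lag(series, k=1):
    """(v) Lagging: shift values back by k periods (leading gaps as None)."""
    return [None] * k + series[:-k]

def logit(p):
    """(vi) Logistic (logit) transformation of a probability in (0, 1)."""
    return math.log(p / (1 - p))
```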
6.5 Sampling
6.5.1
For all types of statistical models, institutions must ensure that samples used for modelling are representative of the target variable to be modelled. Samples must meet minimum statistical properties to be eligible for modelling including, amongst others, a minimum size and a minimum number of data points.
6.5.2
Once a modelling data set has been identified, institutions should use sampling techniques to increase the likelihood of model stability, when possible. The sampling technique must be appropriate to the data set and a justification must be provided. Amongst others, common techniques include dividing data sets into a development sample and a validation sample.
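A common implementation of the development/validation division mentioned above is a seeded random partition; a minimal sketch follows, in which the fraction and seed are hypothetical values that the modelling team must justify and document.

```python
import random

def split_sample(rows, dev_fraction=0.7, seed=42):
    """Randomly divide a modelling data set into development and
    validation samples. The seed makes the split replicable by an
    independent party."""
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_fraction)
    return shuffled[:cut], shuffled[cut:]
```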
6.6 Choice of Methodology
6.6.1
Each methodology employed for modelling must be based upon a conscious, rigorous and documented choice made under the model governance framework, and guided by the model objective. Methodology options can be suggested by third parties, however, the choice of a specific methodology remains a decision made within each institution. The ownership of a methodology must be assigned to a specific team or function within the institution, with sufficient level of seniority. The choice of methodology must be clearly stated and justified in the model development documentation.
6.6.2
The choice of methodology must be made upon comparing several options derived from common industry practice and/or relevant academic literature. Institutions must explicitly list and document the benefits and limitations of each methodology.
6.6.3
The choice of methodology must follow the following principles, which must be included in the model documentation:
(i) Consistency: Methodologies must be consistent and comparable across the institution, across risk metrics and risk types. For instance, two similar portfolios should be subject to similar modelling approaches, unless properly justified.
(ii) Transparency: Methodologies must be clear and well-articulated to all stakeholders, including management, internal audit and the CBUAE. Mathematical formulation must be documented with all parameters clearly mentioned.
(iii) Manageability: A methodology must be chosen only if all the steps of the model life-cycle can support it. Complex methodologies must be avoided if any step of the model life-cycle cannot be performed. The choice of methodology must be based upon its ability to be implemented and successfully maintained.
6.6.4
When choosing the most suitable methodology, institutions must avoid excessive and unreasonable generalisations to compensate for a lack of data.
6.7 Model Construction
6.7.1
Statistical models:
(i) The construction of statistical models must be based upon robust statistical techniques to reach a robust assessment of the coefficients. The statistical techniques should be chosen amongst those commonly employed in the industry for financial modelling and/or those supported by academic scientific literature.
(ii) Institutions must demonstrate that they have undertaken best efforts to understand the characteristics of the data and the nature of the relationships between the dependent and independent variables. In particular, institutions should analyse and discuss the observed correlations between variables and the expected economic causations between them. Institutions should discuss the possibility of non-linear relationships and second order effects. Upon this set of analyses, a clear conclusion must be drawn in order to choose the best-suited approach for the model at hand. The analyses, reasoning and conclusions must all be documented.
(iii) Statistical indicators must be computed and reported in order to support the choice of a model. Thresholds should be explicitly chosen upfront for each statistical indicator. The indicators and associated thresholds should be justified and documented.
(iv) The implementation of statistical techniques is expected to lead to several potential candidate models. Consequently, institutions should identify candidates and rank them by their statistical performance as shown by the performance indicators. The pool of candidate models should form part of the modelling documentation. All model parameters must be clearly documented.
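The screening and ranking of candidate models described in (iii) and (iv) can be sketched as follows; the indicator names (`auc`, `gini`) and threshold values are hypothetical and must be fixed upfront under the model governance framework.

```python
def rank_candidates(candidates, thresholds):
    """Screen candidate models against pre-set indicator thresholds and
    rank the survivors by a chosen performance indicator.

    `candidates` maps model name -> dict of statistical indicators.
    """
    # Keep only candidates that clear every threshold.
    eligible = {
        name: ind for name, ind in candidates.items()
        if all(ind[k] >= floor for k, floor in thresholds.items())
    }
    # Rank survivors by discriminatory power (highest first).
    return sorted(eligible, key=lambda n: eligible[n]["auc"], reverse=True)
```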
6.7.2
Deterministic models:
(i) Deterministic models, such as financial forecasting models or valuation models, do not have statistical confidence intervals. Instead, the quality of their construction should be tested through (a) a set of internal consistency and logical checks and (b) comparison of the model outputs against analytically derived values.
(ii) Amongst other checks, one form of verification consists of computing the same quantity by different approaches. For instance, cash flows can be computed with a financial model through the direct or the indirect method, which should both lead to the same results. Institutions must demonstrate and document that they have put in place a set of consistency checks as part of the development process of deterministic models.
(iii) Several deterministic models can be constructed based on different sets of assumptions. These models should constitute the pool of candidate models to be considered as part of the selection process.
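As a minimal sketch of the consistency check in (ii), the same net cash flow can be computed through a simplified direct and indirect method and the two results compared; the figures, formulas and tolerance below are illustrative only.

```python
def net_cash_flow_direct(receipts, payments):
    """Direct method: cash received minus cash paid."""
    return sum(receipts) - sum(payments)

def net_cash_flow_indirect(net_income, non_cash_charges, working_capital_change):
    """Indirect method: start from net income and adjust for non-cash
    items and working-capital movements (simplified illustration)."""
    return net_income + non_cash_charges - working_capital_change

def consistency_check(direct, indirect, tolerance=1e-6):
    """Both approaches must agree; any breach must be investigated."""
    return abs(direct - indirect) <= tolerance
```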
6.7.3
Expert-based models:
(i) Expert-based models, also referred to as ‘judgemental models’, must be managed according to a comprehensive life-cycle as for any other model. The construction of such models must follow a structured process, irrespective of the subjective nature of their inputs. The documentation must be sufficiently comprehensive to enable subsequent independent validations. In particular, the relationship between variables, the model logic and the rationale for modelling choices should all be documented and approved by the Model Oversight Committee.
(ii) The collection of subjective inputs must be treated as a formal data collection process. This means that the input data must be part of the DMF, with suitable quality control. Expert-based models provided by third parties must be supported by an appropriate involvement of internal subject matter experts.
(iii) Institutions are expected to develop several candidate models based on different assumptions. For all candidates, they should assess the uncertainty of the outputs, which will be a key driver of the model selection.
(iv) Institutions must be mindful of the high Model Risk associated with expert-based models. They must be in a position to justify that appropriate actions have been taken to manage such Model Risk. An additional degree of conservatism should be employed for the design, calibration and usage of expert-based models. The usage of such models for material portfolios could result in additional provisions and/or capital upon reviews from the CBUAE.
6.8 Model Selection
6.8.1
For statistical models, institutions must choose a final model amongst a pool of constructed models. Institutions must implement an explicit mechanism to filter out models and select a final model amongst several available options. It is recommended to select a main model and a challenger model up to the pre-implementation validation step. The selection of a model should include, at a minimum, the criteria outlined below. Institutions should consider all criteria together. Statistical performance should not be the only decisive factor to choose a model.
(i) The chosen model must demonstrate adequate performance, statistical stability and robustness as shown by the statistical indicators and their thresholds.
(ii) The chosen model must be based on appropriate causal relationships, i.e. it should be constructed with variables and relationships that meet economic intuition and make logical business sense, as per the definition section of the MMS. For that purpose, causal diagrams are encouraged.
(iii) The chosen model must also lead to outcomes that meet economic intuition, can be explained easily and can support decision-making appropriately.
(iv) The chosen model must be implementable.
6.8.2
For deterministic and expert-based models, institutions must choose a final model amongst the pool of constructed models based on various assumptions. Institutions must put in place an explicit mechanism to prioritise certain assumptions and therefore choose a model amongst several candidates. In particular, the selection process should incorporate the following criteria:
(i) The relationships between variables should be based on established causal links. The assumptions and limitations of these links should be assessed thoroughly.
(ii) The chosen model should lead to outcomes that meet economic intuition as defined in the MMS, can be explained easily and can support decision-making appropriately.
(iii) The chosen model should be implementable.
6.9 Model Calibration
6.9.1
Model calibration is necessary to ensure that models are suitable to support business and risk decisions. Institutions must ensure that model calibration is based on relevant data that represents appropriately the characteristics and the drivers of the portfolio subject to modelling. This also applies to decisions to override or adjust inputs, coefficients and/or variables. Calibration choices must be fully documented and their assessment must also form part of the validation process. Models should be re-calibrated when deemed necessary, based on explicit numerical indicators and pre-established limits.
6.9.2
The choice of calibration requires judgement and must be closely linked to the objective of each model. In particular, the time period employed for calibration must be carefully justified depending on model types. Pricing models should be accurate. Provision models should be accurate with a degree of conservatism and should reflect the current and future economic conditions. Capital models should be conservative and reflect long term trends. Stress testing models should focus on extreme economic conditions.
6.10 Pre-implementation Validation
6.10.1
The pre-implementation validation of a model is the first independent validation that takes place after the model development. Such validation must ensure that the model is fit for purpose, meets economic intuition as defined in the MMS and generates results that are assessed as reasonable by expert judgement. The depth of such validation must be defined based on model materiality and follow the institution’s model management framework. Tier 1 models must be subject to comprehensive pre-implementation validation.
6.10.2
For the qualitative review, the pre-implementation validation must cover the elements presented in Article 10.3 pertaining to the scope of the independent post-implementation validation. For the quantitative review, the pre-implementation validation must assess the model accuracy, stability and sensitivity as explained in Article 10.4.3 also pertaining to the scope of the independent post-implementation validation.
6.10.3
Institutions must document the scope, limitations and assumptions of models as part of the pre-implementation validation.
6.11 Impact Analysis
6.11.1
The objective of the impact analysis is to quantify the impact of using a newly-developed model or a newly-recalibrated model on the production of financial estimates. Where applicable, the impact analysis should be documented as part of the model development phase and reported to the Model Oversight Committee.
7 Model Implementation
7.1.1
Institutions must consider model implementation as a separate phase within the model life-cycle process. The model development phase must take into account the potential constraints of model implementation. However, successful model development does not guarantee a successful implementation. Consequently, the implementation phase must have its own set of documented and approved principles.
7.2 Project Governance
7.2.1
The implementation of a model must be treated as a project with clear governance, planning, funding, resources, reporting and accountabilities.
7.2.2
The implementation of a model must be approved by Senior Management and must only occur after the model development phase is complete and the model is fully approved.
7.2.3
The implementation project must be fully documented and, at a minimum, must include the following components:
(i) Implementation scope, (ii) Implementation plan, (iii) Roles and responsibilities of each party, (iv) Roll-back plan, and (v) User Acceptance Testing with test cases.
7.2.4
The roles and responsibilities of the parties involved in the model implementation must be defined and documented. At a minimum, the following parties must be identified: (i) the system owner, (ii) the system users, and (iii) the project manager. All parties must be jointly responsible for the timely and effective implementation.
7.2.5
For model implementation, institutions should produce the following key documents, at a minimum:
(i) User specification documentation: this document should specify requirements regarding the system functionalities from the perspective of users.
(ii) Functional and technical specification documentation: this document should specify the technological requirements based on the user specifications.
(iii) A roll-back plan: this document should specify the process by which the implementation can be reversed, if necessary, so that the institution can rely on its previous model.
7.3 Implementation Timing
7.3.1
Institutions must be conscious that models are valid for a limited period of time. Any material delay in implementation diminishes the period during which the model can be used. Newly developed models must be implemented within a reasonable timeframe after the completion of the development phase. This timeframe must be decided upfront and fully documented in the implementation plan.
7.4 System Infrastructure
7.4.1
The IT system infrastructure must be designed to cope with the demands of model sophistication and the volume of regular production. Institutions must assess that demand during the planning phase. Institutions should be in a position to demonstrate that the technological constraints have been assessed.
7.4.2
The IT system infrastructure should include, at a minimum, three environments: (i) development, (ii) testing, and (iii) production.
7.4.3
Institutions must have a management plan for systems failure. A system that does not comply with the business requirements must be replaced.
7.4.4
In the case of systems provided by a third party, institutions must have a contingency plan to address the risks that may arise if the third party is no longer available to support the institution.
7.4.5
If a system is designed to produce a given set of metrics, then institutions must use that system for the production and reporting of those metrics. If an implemented system is not fit for purpose, institutions must not keep the deficient system in place while producing its intended metrics through a shadow or parallel system. Instead, institutions must decommission any deficient system and fully replace it with a functioning system.
7.5 User Acceptance Testing
7.5.1
Institutions must ensure that a User Acceptance Testing (“UAT”) phase is performed as part of the system implementation plan. The objective of this phase is to ensure that the models are suitably implemented according to the agreed specifications.
7.5.2
The model implementation team must define a test plan and test cases to assess the full scope of the system functionalities, from both a technical and a modelling perspective. The test cases should be constructed with gradually increasing complexity. In particular, the test cases should be designed in order to assess each functionality, first independently and then jointly. The test cases should also capture extreme and erroneous inputs. Partial model replication must be used as much as possible.
7.5.3
There must be at least two (2) rounds of UAT to guarantee the correct implementation of the model. Generally, the first round is used to identify issues, while the second round is used to verify that the issues have been remediated.
7.5.4
The UAT test cases and results must be fully documented as part of the model implementation documentation. The test case inputs, results and computation replications must be stored and must be available for as long as the model is used in production.
7.5.5
Institutions must ensure that UAT tests and results are recorded and can be presented to the CBUAE, other regulators and/or auditors to assess whether a model has been implemented successfully. In particular, all rounds of UAT test cases and results must be available upon request from the CBUAE, as long as a model is used in production.
7.5.6
The UAT must be considered successful only upon the sign-off from all identified stakeholders on the UAT results. The UAT plan and results must be approved by the Model Oversight Committee.
7.5.7
Institutions must ensure that the model being implemented remains unchanged during the testing phase.
7.6 Production Testing
7.6.1
Institutions must ensure that a production testing phase is performed as part of the system implementation plan. The objective of this phase is to guarantee the robustness of the system from a technology perspective according to the functional and technical specification documentation.
7.6.2
In particular, the production testing phase must ensure that systems can cope with the volume of data in production and can run within an appropriate execution time.
7.7 Spreadsheet Implementation
7.7.1
It is not recommended that institutions use spreadsheet tools to run material models or to produce metrics used for regular decision-making. More robust systems are preferred. Nevertheless, if spreadsheets are the only possible modelling environment available initially, the standards in 7.7.2 must apply, at a minimum.
7.7.2
Spreadsheet implementation should follow a quality standard as follows:
(i) The spreadsheet should be constructed with a logical flow,
(ii) Formulae should be easily traceable,
(iii) Formulae should be short and constructed in a way that they are easily interpreted. It is recommended to split long formulae into separate components,
(iv) Tables should include titles, units and comments,
(v) Inputs should not be scattered across the sheets but should be grouped in one worksheet/table,
(vi) Hardcoded entries (i.e. fixed inputs) should be clearly identified,
(vii) Tabs should be clean, i.e. when the implementation is completed, all work in progress should be removed,
(viii) Instructions should be included in one or several tabs, and
(ix) Wherever suitable, cells should be locked and worksheets protected, preferably by password.
7.7.3
Models implemented in spreadsheets that deviate from the above criteria must not be employed for regular production.
7.7.4
To ensure their robust implementation, spreadsheet tools must include consistency checks. Common consistency checks include: (i) computing the same results through different methods, (ii) ensuring that a specific set of inputs leads to the correct expected output values, and (iii) ensuring that the sensitivities of outputs to changes in inputs are matching expected values.
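Check (ii) above can be automated with a small benchmark harness run against the tool's calculation logic; in the sketch below, `model_fn` and the test cases are hypothetical stand-ins.

```python
def run_benchmark_cases(model_fn, cases, tolerance=1e-9):
    """Check (ii): verify that known inputs reproduce expected outputs.

    `cases` is a list of (inputs, expected_output) pairs agreed upfront.
    Returns the list of failing cases; an empty list means all passed.
    """
    failures = []
    for inputs, expected in cases:
        actual = model_fn(*inputs)
        if abs(actual - expected) > tolerance:
            failures.append((inputs, expected, actual))
    return failures
```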
7.7.5
If an institution employs spreadsheets for regular production, a rigorous maker-checker process must be implemented and documented. The review of spreadsheet tools must be included in the scope of the independent validation process. In addition, clear version control should be implemented.
8 Model Usage
8.1.1
Model usage is an integral part of model management. Model usage must be defined, documented, monitored and managed according to the following principles.
8.2 Usage Definition and Control
8.2.1
As part of the definition of model strategy and objectives, institutions must articulate and document upfront the expected usage of each model. Model usage must cover, at a minimum, the following components:
(i) The users, identified either as individuals or teams,
(ii) The expected frequency of model utilisation,
(iii) The specific source and nature of the inputs in the production environment,
(iv) The destination of the outputs in terms of IT systems and operational processes,
(v) The interpretation of the outputs, that is, clear guidance on how the outputs should be used, their meaning and the decisions that they can support,
(vi) The limits of the outputs, the associated uncertainty and the decisions that can be supported by the model versus those that should be supported, and
(vii) The governance of output overrides.
8.2.2
Institutions must produce indicators to actively monitor the realisation of the components in 8.2.1 and compare them against initial expectations. These must be documented and reported as part of the monitoring and validation steps of the model life-cycle.
8.2.3
Any deviation between the real usage of a model and the expected usage of a model must be documented in the monitoring and validation phases and remedied promptly, by reverting to the intended usage of the model.
8.3 Responsibilities
8.3.1
The management of model usage is shared between several parties. The model owner is responsible for defining the usage of his/her models. The usage of each model should then be approved by the Model Oversight Committee. If the model owner and model user are different parties, the owner is responsible for providing documentation and training to the user. The model user must then follow the guidance provided by the owner.
8.3.2
The monitoring of model usage can be performed by the model owner, by the validator, or both, depending on the institution’s circumstances. Irrespective of the party performing the monitoring process, the validator must conduct an independent assessment of the appropriate usage of models as part of the validation process. For this purpose, the validator should refer to the monitoring reports, when available.
8.3.3
It may happen that the model owner has limited control over the usage of a model by other parties. In this case, the model owner is responsible for reporting to the Model Oversight Committee any model misuse or usage without his/her consent.
8.4 Input and Output Overrides
8.4.1
This section refers to all model types including, but not limited to, rating models. Manual overrides of model inputs and outputs are possible and permitted, but within limits. For this purpose, institutions must put in place robust governance to manage these overrides. Such governance must be reviewed by the internal audit function. Institutions must implement limits and controls on the frequency and magnitude of overrides. Models whose inputs and/or outputs are frequently and materially overridden must not be considered fit for purpose and must be recalibrated or replaced.
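A minimal sketch of such frequency and magnitude controls is shown below; the limit values are hypothetical and must be set and approved under the institution's override governance.

```python
def override_breaches(overrides, n_outputs, freq_limit=0.05, magnitude_limit=0.25):
    """Monitor override frequency and magnitude against limits.

    `overrides` is a list of (model_value, overridden_value) pairs and
    `n_outputs` the total number of model outputs in the period.
    """
    breaches = []
    # Frequency control: share of outputs that were overridden.
    if n_outputs and len(overrides) / n_outputs > freq_limit:
        breaches.append("frequency limit breached")
    # Magnitude control: relative size of each override.
    for model_value, overridden in overrides:
        if model_value and abs(overridden - model_value) / abs(model_value) > magnitude_limit:
            breaches.append(f"magnitude limit breached: {model_value} -> {overridden}")
    return breaches
```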
8.4.2
During the execution phase, input and/or output overrides must be documented, justified and approved at the appropriate authority level. When necessary, an opinion from technical subject matter experts should be produced. Overrides used by the business lines must be subject to review by the risk management function before being implemented.
8.4.3
The development and validation teams must analyse and understand the reasons for input and/or output overrides and assess whether they are caused by model weaknesses. Overrides must be tracked and reported to the Model Oversight Committee, Senior Management and the Board as part of the monitoring and validation processes.
8.4.4
If a model has been approved and is deemed suitable for production, its outputs must not be ignored. This also applies when model outputs are not meeting commercial expectations. Model outputs must be considered objectively and independently from actual, potential or perceived business line interests.
8.5 User Feedback
8.5.1
Institutions must have a process in place to ensure that model functionalities are working as expected during ongoing utilisation. The objective is to ensure that models have been designed, calibrated and implemented successfully.
8.5.2
The user feedback must cover the model functionalities, stability and consistency of output against economic and business expectations. The user feedback must be documented and reported as part of the monitoring and validation steps of the model life-cycle. If model users are different from model developers, institutions must have a process in place to collect feedback from the identified model users.
8.6 Usage of Rating Models
8.6.1
Institutions must pay particular attention to the usage of rating models due to their material impacts on financial reporting, provisions, risk decisions and business decisions. Specific policies and procedures must be designed to govern their usage, including the scope of usage, the responsibilities and the conditions of output overrides.
8.6.2
At a minimum, a rating must be assigned to each obligor on a yearly cycle. In the case of exceptional circumstances related to the obligor, the industry or the wider economy, ratings may need to be assigned more frequently.
8.6.3
Consistent with Article 8.6.2, upon the roll-out of a new rating model and/or a newly recalibrated (optimised) rating model, institutions must update client ratings as soon as possible and within a period no longer than twelve (12) months. Further details are provided in the MMG on this matter.
9 Model Performance Monitoring
9.1 Objective
9.1.1
Institutions must implement a process to monitor the performance of their models on a regular basis, as part of their model life-cycle management. The relationship between model performance and usage is asymmetric. A robust model does not guarantee relevant usage. However, an improper usage is likely to impact the model performance. Consequently, institutions must ensure that models are used appropriately prior to engaging in performance monitoring.
9.1.2
The objective of the monitoring process is to assess whether changes in the economic environment, market conditions and/or business environment have impacted the performance, stability, key assumptions and/or reliability of models.
9.1.3
Institutions must implement a documented process with defined responsibilities, metrics, limits and reports in order to assess whether models are fit for purpose, on an ongoing basis. Upon this assessment, there must be a clear decision-making process to either (i) continue monitoring or (ii) escalate for further actions.
9.2 Responsibility
9.2.1
The responsibility for the execution of model monitoring must be clearly defined. Institutions have the flexibility to assign this task to the development team, the validation team or to a third party. If model monitoring is assigned to the development team, the monitoring reports must be included in the scope of review of the independent validation process. If model monitoring is assigned to a third party, institutions remain the owners of monitoring reports and remain responsible to take appropriate actions upon the issuance of these reports. Institutions are expected to fully understand and control the content of monitoring reports produced by third party providers.
9.2.2
Monitoring reports must be presented regularly to the Model Oversight Committee. All reports containing limit breaches of monitoring metrics must be discussed by the committee.
9.2.3
The internal audit function must verify that model monitoring is performed appropriately by the assigned party. In particular, the internal audit function must review the relevance, frequency and usability of the monitoring reports.
9.3 Frequency
9.3.1
Model monitoring must be undertaken on a frequent basis and documented as part of the model life-cycle management. Institutions must demonstrate that the monitoring frequency is appropriate for each model. The minimum frequency is indicated in Article 10 of the MMS, which covers the independent validation process.
9.4 Metrics and Limits
9.4.1
Institutions must develop metrics and limits to appropriately track model performance. The metrics must be carefully designed to capture the model performance based on its specific characteristics and its implementation. At a minimum, the monitoring metrics must capture the model accuracy and stability as explained in Article 10.4.3 pertaining to the scope of the post-implementation validation. In addition, the monitoring metrics must track the model usage to assess whether the model is used as intended.
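One widely used stability metric that institutions may adopt for this purpose is the Population Stability Index (PSI). The sketch below assumes the score distribution has already been binned into proportions; the ~0.25 alert level mentioned in the comment is common industry practice, not a requirement of the MMS.

```python
import math

def population_stability_index(expected, actual):
    """Population Stability Index: compares the current population
    distribution across score bins against the development-sample
    distribution. Both inputs are bin proportions summing to 1.
    Common (institution-specific) practice flags PSI above ~0.25."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0  # skip empty bins to avoid log(0)
    )
```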
9.5 Reporting and Decision-Making
9.5.1
Institutions must implement a regular process to report the results of model monitoring to the Model Oversight Committee, the CRO and to the model users.
9.5.2
Reports must be clear and consistent through time. For each model, monitoring metrics must be included along with their respective limits. Time series of the metrics should be provided in order to assess their volatility and/or stability through time and therefore help form a view on the severity of limit breaches. Explanations on the nature and meaning of each metric must be provided, in such a way that the report can be understood by the members of the Model Oversight Committee and by auditors.
9.5.3
Regardless of the party responsible for model monitoring, all reports must be circulated to both the development team and the independent validation team, as soon as they are produced. For some models, monitoring reports can also be shared with the model users.
9.5.4
In each report, explanations on the significance of limit breaches must be provided. Sudden material deterioration of model performance must be discussed promptly between the development team and the validation team. If necessary, such deterioration must be escalated to the Model Oversight Committee and the CRO outside of the scheduled steps of the model life-cycle. The Committee and/or the CRO may decide to suspend the usage of a model or accelerate the model review upon the results of the monitoring process.
9.5.5
Institutions must define the boundaries of model usage. These are the limits and conditions upon which a model is immediately subject to adjustments, increased margins of conservatism, exceptional validation and/or suspension. Specific triggers must be clearly defined and employed to identify abnormalities in model outputs.
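The usage boundaries described above can be sketched as a simple two-threshold rule per monitoring metric. The metric names, thresholds and action labels below are hypothetical; an institution would substitute its own policy values.

```python
from dataclasses import dataclass

@dataclass
class UsageBoundary:
    """Illustrative usage boundary for one monitoring metric.
    Names and thresholds are hypothetical, not prescribed by the MMS."""
    metric: str
    soft_limit: float   # breach -> adjustment / added conservatism
    hard_limit: float   # breach -> exceptional validation / suspension

def evaluate(boundaries, observed):
    """Map observed metric values to the action each breach triggers."""
    actions = {}
    for b in boundaries:
        value = observed[b.metric]
        if value >= b.hard_limit:
            actions[b.metric] = "suspend_or_exceptional_validation"
        elif value >= b.soft_limit:
            actions[b.metric] = "adjust_or_add_conservatism"
        else:
            actions[b.metric] = "none"
    return actions
```

For example, a PSI boundary of 0.10 (soft) and 0.25 (hard) would flag a reading of 0.30 for suspension or exceptional validation.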
10 Independent Validation
10.1 Objective and Scope
10.1.1
The independent validation of models is a key step of their life-cycle management. The objective is to undertake a comprehensive review of models in order to assess whether they are performing as expected and in line with their designed objective. While monitoring and validation are different processes run at different frequencies, the content of the monitoring process forms a subset of the broader scope covered by the validation process. Therefore, when available, the results of the monitoring process must be used as inputs into the validation process.
10.1.2
Institutions must put in place a rigorous process with defined responsibilities, metrics, limits and reporting in order to meet the requirements of independent model validation. Part of the metrics must be common between the monitoring process and the validation process. Independent validation must be applied to all models including statistical models, deterministic models and expert-based models whether they have been developed internally or acquired from a third party provider.
10.1.3
The validation scope must cover both a qualitative validation and a quantitative validation. Both validation approaches complement each other and must not be considered separately. A qualitative validation alone is not sufficient to be considered as a complete validation since it does not constitute an appropriate basis on which modelling decisions can be made. If insufficient data is available to perform the quantitative validation, the validation process should be flagged as incomplete to the Model Oversight Committee, which should then make a decision regarding the usage of the model in light of the uncertainty and the Model Risk associated with such a partially validated model.
10.1.4
The scope of the validation must be comprehensive and clearly stated. The scope must include all relevant model features that are necessary to assess whether the model produces reliable outputs to meet its objectives. If a validation is performed by a third party, institutions must ensure that the validation scope is comprehensive. It may happen that an external validator cannot fully assess all relevant aspects of a model for valid reasons. In this case, institutions are responsible for performing the rest of the validation and for ensuring that the scope is complete.
10.1.5
A validation exercise must result in an independent judgement with a clear conclusion regarding the suitability of the model. A mere description of the model features and performance does not constitute a validation. Observations must be graded according to an explicit scale including, but not limited to, ‘high severity’, ‘medium severity’ and ‘low severity’. The severity of model findings must reflect the degree of uncertainty surrounding the model outputs, independently of the model materiality, size or scope. As a second step, this degree of uncertainty should be used to estimate Model Risk, since the latter is defined as the combination of model uncertainty and materiality.
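The two-step logic above (finding severity reflects output uncertainty only; Model Risk then combines uncertainty with materiality) can be illustrated with a simple grading matrix. The three-point scales and the combination rule are assumptions for illustration; the MMS only requires that the two dimensions be kept distinct.

```python
# Illustrative two-step grading. Severity/uncertainty is assessed
# independently of materiality; Model Risk combines both, consistent
# with its definition as uncertainty x materiality.
UNCERTAINTY = ["low", "medium", "high"]
MATERIALITY = ["low", "medium", "high"]

def model_risk(uncertainty: str, materiality: str) -> str:
    """Combine the two dimensions on a simple additive matrix
    (a hypothetical rule, not prescribed by the MMS)."""
    score = UNCERTAINTY.index(uncertainty) + MATERIALITY.index(materiality)
    # score ranges 0..4; map it back onto a three-point risk scale.
    return ["low", "low", "medium", "high", "high"][score]
```

Under this sketch, a high-uncertainty finding on a low-materiality model yields medium Model Risk, while the same finding on a high-materiality model yields high Model Risk.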
10.1.6
In addition to the finding severity, institutions must create internal rating scales to assess the overall performance of each model. This performance rating should be a key input to the decision process at each step of the model life-cycle.
10.2 Responsibilities
10.2.1
Institutions must put in place a rigorous process to ensure that models are independently validated either by an internal dedicated team or by a third party provider, or both. If model validation is assigned to a third party, institutions remain the owners of validation reports and must take appropriate action upon the issuance of these reports.
10.2.2
In order to ensure its independence and efficiency, the party responsible for model validation (“validator”) must be able to demonstrate all of the following characteristics. If the validator does not possess all of these characteristics, the validation reports must not be considered sufficiently independent and/or robust and therefore must not be used for decision-making:
(i) An advanced understanding of model methodologies and validation techniques that is sufficiently mature to allow the formulation of independent judgement.
(ii) The expertise and freedom to express, hold and defend views that differ from those of the development team and of management, and the ability to present those views to the Model Oversight Committee, Senior Management and the Board.
(iii) The ability to perform independent research and articulate alternative proposals.
10.2.3
The internal audit function is responsible for verifying that model validation is performed appropriately by the assigned party, following a regular audit cycle. At a minimum, the audit function must cover the following scope:
(i) Review the governance surrounding the internal validation process and assess its independence in light of the MMS.
(ii) Form a view regarding the suitability of the depth and scope of the work performed by the validation team, also in light of the MMS.
(iii) Review the relevance, frequency and effectiveness of the validation process. At a minimum, the auditor must review the list of findings issued by the validator and assess whether the time allowed for remediation is appropriate.
10.2.4
The internal audit function should employ third party experts to assist on technical matters until it can demonstrate that it can perform an adequate review of the model validation process without technical support. If the audit team employs supporting experts, it remains the sole owner of the conclusions of the audit report.
10.3 Qualitative Validation
10.3.1
The independent validation process must include a review of the model conceptual soundness, design and suitability of the development process. The scope of the qualitative validation varies depending on the statistical or deterministic nature of the model. This must include, at a minimum, a review of the following elements:
(i) The model governance and decision process,
(ii) The model conceptual soundness, purpose and scope,
(iii) The methodology, including the mathematical construction,
(iv) The suitability of the output in terms of economic intuition and business sense as defined in the MMS, and
(v) The suitability of the implementation (when the model is implemented).
In addition, for statistical models:
(vi) The choice of variables and their respective transformations,
(vii) The suitability of the data in terms of sources, filters and time period, and
(viii) The suitability of the sampling techniques, if any.
10.4 Quantitative Validation
10.4.1
The quantitative validation must assess the suitability of the model output with respect to the objective initially assigned to the model. This process must rely on numerical analyses to derive its conclusions. Such validation should include dedicated analyses to arrive at an independent judgement. Under certain circumstances, partial model replication and/or a challenger model may be necessary to form a judgement.
10.4.2
The set of metrics employed for model validation must at least include those employed for monitoring. As a first step, the validator must review the monitoring reports and their observations. In addition, institutions should employ a broader spectrum of performance metrics to fully assess model performance, since the scope of the validation process is larger than that of monitoring.
10.4.3
The assessment of model performance must cover, at a minimum, the following components, applicable to both statistical and deterministic models:
(i) Accuracy and conservatism: The ability of a model to generate predictions that are close to the realised values, observed both before and after the model development phase. For models whose results are subject to material uncertainty, the validator should assess whether sufficient conservatism is included in the model calibration.
(ii) Stability and robustness: Whilst there are theoretical differences between stability and robustness, for the purpose of this MMS, this refers to the ability of a model to withstand perturbations, i.e. to maintain its accuracy despite variability in its inputs or when the modelling assumptions are not fully satisfied. In particular, this means the ability of a model to generate consistent and comparable results through time.
(iii) Controlled sensitivity: This relates to the model construction. Model sensitivity refers to the relationship between a change in the model inputs and the observed change in the model results. The sensitivity of the output to a change in inputs must be logical, fully understood and controlled.
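The three components above can each be given a simple numerical form. As an illustrative sketch only (the choice of statistics is an assumption, not an MMS requirement): accuracy and conservatism can be summarised by the mean absolute error and the signed bias of predictions against realised values, and controlled sensitivity by a finite-difference response of the output to a small input perturbation.

```python
import numpy as np

def accuracy_report(predicted, realised):
    """Accuracy/conservatism sketch: a positive bias means the model
    over-predicts on average (conservative for loss estimates)."""
    predicted = np.asarray(predicted, dtype=float)
    realised = np.asarray(realised, dtype=float)
    err = predicted - realised
    return {"mae": float(np.mean(np.abs(err))),
            "bias": float(np.mean(err))}

def sensitivity(model, x, eps=1e-4):
    """Controlled-sensitivity sketch: central finite-difference
    response of a single-input model to a perturbation of size eps."""
    return (model(x + eps) - model(x - eps)) / (2 * eps)
```

The validator would then check that the bias is of the intended sign and that the sensitivity is of plausible magnitude and direction across the relevant input range.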
10.4.4
The quantitative validation process should include a review of the suitability, relevance and accuracy of the following components.
For both statistical and deterministic models:
(i) The implementation,
(ii) The adjustments and scaling factors, if any,
(iii) The ‘hard-coded’ rules and mappings,
(iv) The extrapolations and interpolations, if any, and
(v) The sensitivities to changes in inputs.
In addition, for statistical models only:
(vi) The model coefficients,
(vii) The statistical accuracy of the outputs,
(viii) The raw data as per the DMF requirements, and
(ix) The historical time series.
In addition, for deterministic models only:
(x) A decomposition of the model drivers and their associated sensitivity, and
(xi) A partial replication, when possible.
10.5 Review Frequency
10.5.1
All models must be validated at regular frequencies appropriate to model types and tiers. The review periods should not be longer than the ones presented in Table 2 below. More frequent reviews can be implemented at the discretion of institutions, depending on model types and complexity. More frequent reviews may also be necessary in the case of unforeseen circumstances, for instance related to changes in model usage and/or changes in the economic environment. Less frequent reviews are possible in certain circumstances, but they should be justified and will be subject to assessment from the CBUAE.
10.5.2
The dates corresponding to the last monitoring and validation exercises must be tracked rigorously, included in the model inventory and reported to the Model Oversight Committee at least every quarter. The internal audit function must ensure that this process is implemented effectively by the model owner and the validator.
Table 2: Minimum monitoring and validation frequencies for most common models
Portfolio | Model Type | Tier 1 Monitoring | Tier 1 Validation | Tier 2 Monitoring | Tier 2 Validation
Wholesale | Rating | 1 year | 3 years | 2 years | 5 years
Wholesale | PD term structure | 1 year | 3 years | 2 years | 5 years
Wholesale | Macro-PD | 1 year | 2 years | 2 years | 3 years
Wholesale | LGD | 1 year | 3 years | 2 years | 5 years
Wholesale | Macro-LGD | 1 year | 2 years | 2 years | 3 years
Retail | Scorecard | 3 months | 1 year | 6 months | 3 years
Retail | PD term structure | 1 year | 2 years | 2 years | 3 years
Retail | Macro-PD | 1 year | 2 years | 2 years | 3 years
Retail | LGD | 1 year | 2 years | 2 years | 3 years
Retail | Macro-LGD | 1 year | 2 years | 2 years | 3 years
EAD | EAD | 1 month | 3 years | 2 years | 5 years
Trading Book | VaR and related models | 3 months | 3 years* | 6 months | 4 years*
Trading Book | Exposure and xVA | 1 year | 3 years* | 6 months | 4 years*
Multiple | Valuation | 1 year | 3 years* | n/a | 4 years*
Multiple | Concentration | 1 year | 3 years** | n/a | n/a
Multiple | IRRBB | 1 year | 3 years** | n/a | n/a
Multiple | Other Pillar II models | 1 year | 3 years** | n/a | n/a
Multiple | Capital forecasting | 1 year | 3 years** | n/a | n/a
Multiple | Liquidity | 1 year | 3 years** | n/a | n/a
10.5.3
Where [*] is indicated in Table 2 above: For pricing and traded risk models such as VaR, exposure and xVA models, a distinction should be made between (i) the model mechanics, (ii) the calibration and (iii) the associated market data. The mechanics should be reviewed at least every 3 to 4 years; however, the suitability of the calibration and the market data should be reviewed more frequently as part of the model monitoring process. In addition to these frequencies, any exceptional market volatility should trigger a revision of all model decisions.
10.5.4
Where [**] is indicated in Table 2 above: For deterministic models such as capital forecasting, concentration and IRRBB models, a distinction should also be made between (i) the model mechanics and (ii) the input data. Whilst the mechanics (methodology and system) can be assessed every 3 years, the calibration must be reviewed yearly in order to assess the appropriate usage of the model with a new set of inputs. This yearly frequency is motivated by the strategic usage of such models in the ICAAP.
10.5.5
For models other than those mentioned in Table 2 above, institutions must establish a schedule for monitoring and validation that is coherent with their nature and their associated Model Risk.
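The tracking of review dates required by Article 10.5.2 can be sketched as a simple due-date check against the maximum periods of Table 2. The dictionary below carries only one example model type as an assumption; an institution would populate it from its model inventory and the full table.

```python
from datetime import date

# Illustrative extract of maximum validation periods in years, keyed by
# (model type, tier). Values follow the wholesale rating row of Table 2;
# the key naming scheme is hypothetical.
MAX_VALIDATION_YEARS = {
    ("wholesale_rating", 1): 3,
    ("wholesale_rating", 2): 5,
}

def validation_due(model_type, tier, last_validation, today):
    """True if the model has reached or passed its maximum
    validation period and must be re-validated."""
    limit = MAX_VALIDATION_YEARS[(model_type, tier)]
    due_date = last_validation.replace(year=last_validation.year + limit)
    return today >= due_date
```

A quarterly report to the Model Oversight Committee would then list every model for which this check returns true, alongside the corresponding monitoring check.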
10.6 Reporting of Findings
10.6.1
The analyses and tests performed during the validation of a model must be rigorously documented in a validation report. Validation reports must be practical, action orientated, focused on findings and avoid unnecessary theoretical digressions. A validation report should include, at a minimum, the following components:
(i) The model reference number, nomenclature, materiality and classification,
(ii) The implementation date, the monitoring dates and the last validation date, if any,
(iii) A clear list of findings with their associated severity,
(iv) Suggestions for remediation, when appropriate,
(v) The value of each performance indicator with its associated limit,
(vi) The results of the qualitative review as explained above,
(vii) The results of the quantitative review as explained above,
(viii) The model risk rating, and
(ix) A conclusion regarding the overall performance.
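The minimum report contents listed above lend themselves to a structured record, which helps keep reports consistent through time. The sketch below is one possible shape; all field names are assumptions, not MMS terminology.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    """Illustrative record mirroring the minimum report contents of
    Article 10.6.1. Field names are hypothetical."""
    model_ref: str                                  # (i) reference and nomenclature
    classification: str                             # (i) materiality and tier
    implementation_date: str                        # (ii) key life-cycle dates
    monitoring_dates: list = field(default_factory=list)
    last_validation_date: str = ""
    findings: list = field(default_factory=list)    # (iii)/(iv) (finding, severity, remediation)
    metrics: dict = field(default_factory=dict)     # (v) metric -> (value, limit)
    qualitative_results: str = ""                   # (vi)
    quantitative_results: str = ""                  # (vii)
    model_risk_rating: str = ""                     # (viii)
    conclusion: str = ""                            # (ix) overall performance
```

Keeping reports in a structured form also simplifies the quarterly tracking of findings required later in the remediation process.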
10.6.2
The model validation report must refer to the steps of the model life-cycle. Its conclusion should be one of the following possible outcomes, as mentioned in the model governance section:
(i) Leave the model unchanged,
(ii) Use a temporary adjustment while establishing a remediation plan,
(iii) Recalibrate the model,
(iv) Redevelop a new model, or
(v) Withdraw the model without further redevelopment.
10.7 Remediation Process
10.7.1
Institutions must put in place effective processes to manage observations and findings arising from independent validation exercises. The remediation process must be structured and fully documented in the institution’s policy. The findings need to be clearly recorded and communicated to all model stakeholders including, at least, the development team, the members of the Model Oversight Committee and Senior Management. The members of the committee must agree on a plan to translate the findings into actionable items which must be addressed in a timely fashion.
10.7.2
If an institution decides not to address some model defects, it must identify, assess and report the associated Model Risk. It must also consider retiring and/or replacing the model or implementing some other remediation plan. Such a decision may result in additional provisions and/or capital buffers and will be subject to review by the CBUAE.
10.7.3
Upon completion, the validation report must be discussed between the validator and the development team, with the objective to reach a common understanding of the model weaknesses and their associated remediation. Both parties are expected to reach a conclusion on the validation exercise, its outcomes and its remediation plan. The following must be considered:
(i) The views expressed by both parties must be technical, substantiated and documented. The development team and/or the model owner should provide a response to all the observations and findings raised by the validator.
(ii) The views expressed by both parties must aim towards a practical resolution, with the right balance between theoretical requirements and practical constraints.
(iii) The resolution of modelling defects must be based on minimising the estimated Model Risk implicit in each remediation option.
(iv) Outstanding divergent views between both parties should be resolved by the Model Oversight Committee.
10.7.4
For each finding raised by the validator, the following must be submitted to the Model Oversight Committee for consideration: (i) substantiated evidence from the validator, (ii) the opinion of the development team, (iii) a suggested remediation, if deemed necessary, and (iv) a remediation date, if applicable. The Model Oversight Committee must decide to proceed with one of the options listed in Article 10.6.2 above. When making a choice amongst the various options, the Committee must consider their respective Model Risk and associated financial implications.
10.7.5
The validator must keep track of the findings and remediating actions and report them to the Model Oversight Committee and Senior Management on a quarterly basis, and to the Board (or to a specialised body of the Board) on a yearly basis. Such status reports must cover all models and present the outstanding Model Risk. The reports must be reviewed by the internal audit function as part of their audit review. Particular attention should be given to repeated findings from one validation to the next.
10.7.6
If the institution does not have an internal validation team, then reporting of model findings and remediation can be performed by another function within the institution. However, the internal audit function must regularly review the reporting process to ensure that such reporting is an accurate representation of the status of model performance.
10.7.7
Institutions must aim to resolve model findings promptly in order to mitigate Model Risk. For that purpose, institutions must develop a process to manage defect remediation effectively. This process must include the following principles:
(i) High severity findings must be addressed immediately with tactical solutions, irrespective of the model Tier. Such solutions can take the form of temporary adjustments, overlays and/or scaling in order to reduce the risk of inaccurate model outputs and introduce a degree of conservatism. Tactical solutions must not become permanent, must be associated with an expiration date and must cease after the implementation of permanent remediation.
(ii) Institutions must establish maximum remediation periods per finding severity, per model Tier and per model type. The remediation period must start from the date at which the Model Oversight Committee reaches an agreement on the nature and severity of the finding. For findings requiring urgent attention, an accelerated approval process must be put in place to start remediation work.
(iii) Tactical solutions must only be temporary in nature and institutions should aim to fully resolve high severity findings within six (6) months. At a maximum, high severity findings must be resolved no later than twelve (12) months after their identification. High severity findings not resolved within six (6) months must be reported to the Board and to the CBUAE.
(iv) When establishing maximum remediation periods, institutions must take into account model types in order to mitigate Model Risk appropriately. For instance, defects related to market risk / pricing models should be remedied within weeks, while defect remediation for rating models could take longer.
(v) For each defect, a clear plan must be produced in order to reach timely remediation. Priority should be given to models with greater financial impacts. The validator should express its view on the timing and content of the plan, and the remediation plan should be approved by the Model Oversight Committee.
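The six- and twelve-month rule for high severity findings can be sketched as a status check on each finding's age. The day counts are approximations of six and twelve months, and the status labels are assumptions; the MMS only spells out the high severity rule, leaving other severities to internal policy.

```python
from datetime import date

# Approximate day counts for the six/twelve month rule (assumptions).
TARGET_DAYS = 182       # aim to resolve within six months
HARD_LIMIT_DAYS = 365   # must be resolved within twelve months

def remediation_status(agreed_on: date, today: date, severity: str) -> str:
    """Classify a finding's remediation status. `agreed_on` is the date
    the Model Oversight Committee agreed on the finding's severity."""
    if severity != "high":
        return "per_internal_policy"
    age = (today - agreed_on).days
    if age > HARD_LIMIT_DAYS:
        return "breach_report_to_board_and_cbuae"
    if age > TARGET_DAYS:
        return "overdue_report_to_board_and_cbuae"
    return "within_target"
```

Running such a check over the findings log would feed the quarterly status reporting to the Model Oversight Committee and Senior Management described below.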
10.7.8
At the level of the institution, the timing for finding resolution is a reflection of the effectiveness of the validation process and the ability of the institution to manage Model Risk. This will be subject to particular attention from the CBUAE. Exceptions to the time frame defined by institutions must be formally approved by Senior Management upon robust justification and will be reviewed by the CBUAE as part of regular supervision.