6 Model Development
6.1.1
The development of internal models must follow a documented and structured process with sequential and logical steps, supporting the construction of the most appropriate models to meet the objectives assigned to these models. At a minimum, institutions must consider the following components. More components can be added depending on the type of model. If a component is not addressed, then clear justification must be provided.
(i) Data preparation, (ii) Data exploration (for statistical models), (iii) Data transformation, (iv) Sampling (for statistical models), (v) Choice of methodology, (vi) Model construction, (vii) Model selection, (viii) Model calibration (for statistical models), (ix) Pre-implementation validation, and (x) Impact analysis.
6.1.2
This process must be iterative: if one step is not satisfactory, prior steps must be repeated. For instance, if no model can be successfully constructed, additional data may be needed or another methodology should be explored.
6.1.3
Each of these steps must be fully documented to enable an independent assessment of the modelling choices and their execution. This requirement is essential to support an adequate, independent model validation. Mathematical expressions must be documented rigorously to enable replication if needed.
6.1.4
For the purpose of risk models, a sufficient degree of conservatism must be incorporated in each of the development steps to compensate for uncertainties. This is particularly relevant in the choice of data and the choice of methodology.
6.2 Data Preparation and Representativeness
6.2.1
Institutions must demonstrate that the data chosen for modelling is representative of the key attributes of the variables to be modelled. In particular, the time period, product types, obligor segments and geographies must be carefully chosen. The development should not proceed further if the data is deemed not representative of the variable being modelled. The institution should use a conservative buffer instead of a model, until a robust model can be built.
6.2.2
For the purpose of preparation and accurate representation, the data may need to be filtered. For instance, specific obligors, portfolios, products or time periods could be excluded in order to focus on the relevant data. Such filtering must be supported by robust documentation and governance, such that the institution is in a position to measure the impact of data filtering on model outputs. The tools and codes employed to apply filters must be fully transparent and replicable by an independent party.
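The transparent and replicable filtering described above can be sketched as a small routine that logs the impact of each filter on the data set; the record fields, filter names and exclusion rules below are purely hypothetical examples.

```python
# Illustrative sketch of documented, replicable data filtering.
# Record fields, filter names and exclusion rules are hypothetical.

def apply_filters(records, filters):
    """Apply named filters sequentially, logging how many records each
    filter removes so the impact on model inputs is measurable."""
    log = []
    remaining = list(records)
    for name, predicate in filters:
        kept = [r for r in remaining if predicate(r)]
        log.append({"filter": name, "removed": len(remaining) - len(kept)})
        remaining = kept
    return remaining, log

# Hypothetical loan records and exclusion rules.
loans = [
    {"product": "mortgage", "year": 2019, "exposure": 100},
    {"product": "credit_card", "year": 2015, "exposure": 20},
    {"product": "mortgage", "year": 2014, "exposure": 80},
]
filters = [
    ("exclude_pre_2015", lambda r: r["year"] >= 2015),
    ("mortgages_only", lambda r: r["product"] == "mortgage"),
]
filtered, audit_log = apply_filters(loans, filters)
```

Keeping the per-filter audit log alongside the filtered data is what allows an independent party to replicate the filtering and measure its impact on model outputs.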
6.3 Data Exploration
6.3.1
The data exploration phase must be used to confirm whether the data set is suitable for modelling purposes. The objective is to understand the nature and composition of the data set at hand and to identify expected or unusual patterns in the data. In this process, critical thinking and judgement is expected from the modelling team.
6.3.2
Descriptive statistics should be produced across both the dependent and independent variables. For instance, for credit risk modelling, such exploration is relevant to identify whether obligors have homogeneous features per segment; for market risk modelling, it is relevant to assess whether the market liquidity of the underlying product is sufficient to ensure a minimum reliability of the market factor time series.
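A minimal illustration of producing such descriptive statistics per segment, using only Python's standard library; the segments and default rates are invented for the example.

```python
# Sketch of per-segment descriptive statistics for data exploration.
# Segment names and default rates are illustrative only.
import statistics

def describe(values):
    """Basic descriptive statistics for one variable."""
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),
        "min": min(values),
        "max": max(values),
    }

# Hypothetical observed default rates per obligor segment.
segment_default_rates = {
    "retail": [0.02, 0.03, 0.025, 0.028],
    "sme": [0.05, 0.04, 0.06, 0.055],
}
summary = {seg: describe(v) for seg, v in segment_default_rates.items()}
```

Comparing such summaries across segments is one simple way to check whether obligors within a segment behave homogeneously before modelling proceeds.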
6.3.3
Institutions must clearly state the outcome of the data exploration step, that is, whether the data is fit for modelling or not. In the latter case, the development process must stop and additional suitable data must be sourced. Consequently, data unavailability must not excuse unreliable and inaccurate model output.
6.3.4
The exploration of data can lead to unusual, counterintuitive or even illogical patterns. Such features should not be immediately accepted as a mere consequence of the data. Instead, the modelling team is expected to analyse these patterns further, at a lower level of granularity, to understand their origin. Subsequently, either (i) the pattern should be accepted as a matter of fact, (ii) the data should be adjusted, or (iii) the data set should be replaced. This investigation must be fully documented because it has material consequences on model calibration.
6.4 Data Transformation
6.4.1
Institutions must search for the most appropriate transformation of the dependent and the independent variables, in order to maximise the explanatory power of models. If some variables do not need to be transformed, such conclusion must be clearly stated and justified in the model development documentation.
6.4.2
The choice of variable transformation must be neither random nor coincidental. Transformations must be justified by an economic rationale. Amongst others, common transformations include (i) relative or absolute differencing between variables, (ii) logarithmic scaling, (iii) relative or absolute time change, (iv) ranking and binning, (v) lagging, and (vi) logistic or probit transformation. Quadratic and cubic transformations are possible but should be used with caution, backed by a robust economic rationale and a clear purpose.
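Several of the transformations listed above can be sketched on a short hypothetical time series; the series itself and its values are illustrative.

```python
# Sketch of common variable transformations on a hypothetical series.
import math

gdp = [100.0, 102.0, 105.0, 103.0]  # illustrative index levels

log_levels = [math.log(x) for x in gdp]                  # logarithmic scaling
abs_diff = [b - a for a, b in zip(gdp, gdp[1:])]         # absolute differencing
rel_diff = [(b - a) / a for a, b in zip(gdp, gdp[1:])]   # relative differencing
lag_1 = [None] + gdp[:-1]                                # lagging by one period

def logistic(x):
    """Logistic transformation, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```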
6.5 Sampling
6.5.1
For all types of statistical models, institutions must ensure that samples used for modelling are representative of the target variable to be modelled. Samples must meet minimum statistical properties to be eligible for modelling including, amongst others, a minimum size and a minimum number of data points.
6.5.2
Once a modelling data set has been identified, institutions should use sampling techniques to increase the likelihood of model stability, when possible. The sampling technique must be appropriate to the data set and a justification must be provided. Amongst others, common techniques include dividing data sets into a development sample and a validation sample.
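The division into development and validation samples can be sketched as follows; the 70/30 split ratio and the fixed seed are illustrative assumptions, chosen here so the split is reproducible by an independent party.

```python
# Sketch of splitting a modelling data set into development and
# validation samples. The 70/30 ratio and seed are illustrative.
import random

def split_sample(data, dev_fraction=0.7, seed=42):
    """Shuffle reproducibly, then split into development/validation."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_fraction)
    return shuffled[:cut], shuffled[cut:]

observations = list(range(100))  # hypothetical observation identifiers
dev, val = split_sample(observations)
```

Fixing the seed makes the sampling replicable, which supports the documentation and independent-validation requirements above.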
6.6 Choice of Methodology
6.6.1
Each methodology employed for modelling must be based upon a conscious, rigorous and documented choice made under the model governance framework, and guided by the model objective. Methodology options can be suggested by third parties, however, the choice of a specific methodology remains a decision made within each institution. The ownership of a methodology must be assigned to a specific team or function within the institution, with sufficient level of seniority. The choice of methodology must be clearly stated and justified in the model development documentation.
6.6.2
The choice of methodology must be made upon comparing several options derived from common industry practice and/or relevant academic literature. Institutions must explicitly list and document the benefits and limitations of each methodology.
6.6.3
The choice of methodology must follow the following principles, which must be included in the model documentation:
(i) Consistency: Methodologies must be consistent and comparable across the institution, across risk metrics and risk types. For instance, two similar portfolios should be subject to similar modelling approaches, unless properly justified.
(ii) Transparency: Methodologies must be clear and well-articulated to all stakeholders, including management, internal audit and the CBUAE. Mathematical formulation must be documented with all parameters clearly stated.
(iii) Manageability: A methodology must be chosen only if all the steps of the model life-cycle can support it. Complex methodologies must be avoided if any step of the model life-cycle cannot be performed. The choice of methodology must be based upon its ability to be implemented and successfully maintained.
6.6.4
When choosing the most suitable methodology, institutions must avoid excessive and unreasonable generalisations to compensate for a lack of data.
6.7 Model Construction
6.7.1
Statistical models:
(i) The construction of statistical models must be based upon robust statistical techniques to reach a reliable assessment of the coefficients. The statistical techniques should be chosen amongst those commonly employed in the industry for financial modelling and/or those supported by academic literature.
(ii) Institutions must demonstrate that they have undertaken best efforts to understand the characteristics of the data and the nature of the relationships between the dependent and independent variables. In particular, institutions should analyse and discuss the observed correlations between variables and the expected economic causations between them. Institutions should also discuss the possibility of non-linear relationships and second-order effects. Based on this set of analyses, a clear conclusion must be drawn in order to choose the best-suited approach for the model at hand. The analyses, reasoning and conclusions must all be documented.
(iii) Statistical indicators must be computed and reported in order to support the choice of a model. Thresholds should be explicitly chosen upfront for each statistical indicator. The indicators and associated thresholds should be justified and documented.
(iv) The implementation of statistical techniques is expected to lead to several potential candidate models. Consequently, institutions should identify candidates and rank them by their statistical performance as shown by the performance indicators. The pool of candidate models should form part of the modelling documentation. All model parameters must be clearly documented.
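The filtering and ranking of candidate models against an upfront threshold can be sketched as follows; the indicator (R-squared), the threshold value and the candidate scores are all hypothetical.

```python
# Sketch of ranking candidate models by a performance indicator against
# a threshold chosen upfront. Indicator, threshold and scores are
# hypothetical.
THRESHOLD_R2 = 0.60  # illustrative threshold, fixed before estimation

candidates = [
    {"name": "model_a", "r_squared": 0.72},
    {"name": "model_b", "r_squared": 0.55},
    {"name": "model_c", "r_squared": 0.68},
]

# Keep only candidates meeting the threshold, then rank by performance.
eligible = [c for c in candidates if c["r_squared"] >= THRESHOLD_R2]
ranked = sorted(eligible, key=lambda c: c["r_squared"], reverse=True)
```

The full `candidates` list, including the rejected ones, would form part of the modelling documentation, consistent with point (iv) above.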
6.7.2
Deterministic models:
(i) Deterministic models, such as financial forecasting models or valuation models, do not have statistical confidence intervals. Instead, the quality of their construction should be tested through (a) a set of internal consistency and logical checks and (b) comparison of the model outputs against analytically derived values.
(ii) Amongst other checks, one form of verification consists of computing the same quantity by different approaches. For instance, cash flows can be computed with a financial model through the direct or the indirect method, both of which should lead to the same results. Institutions must demonstrate and document that they have put in place a set of consistency checks as part of the development process of deterministic models.
(iii) Several deterministic models can be constructed based on different sets of assumptions. These models should constitute the pool of candidate models to be considered as part of the selection process.
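One such consistency check can be illustrated by computing the same net cash flow by two routes and verifying that they agree; all figures are invented for the example.

```python
# Sketch of a deterministic-model consistency check: compute the same
# net cash flow by two routes and verify agreement. Figures are
# illustrative.
inflows = [120.0, 80.0, 50.0]
outflows = [60.0, 90.0, 30.0]

# Route 1: total inflows minus total outflows.
net_direct = sum(inflows) - sum(outflows)

# Route 2: sum of period-by-period net flows.
net_indirect = sum(i - o for i, o in zip(inflows, outflows))

# The two routes must agree up to numerical tolerance.
consistent = abs(net_direct - net_indirect) < 1e-9
```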
6.7.3
Expert-based models:
(i) Expert-based models, also referred to as ‘judgemental models’, must be managed according to a comprehensive life-cycle, as for any other model. The construction of such models must follow a structured process, irrespective of the subjective nature of their inputs. The documentation must be sufficiently comprehensive to enable subsequent independent validations. In particular, the relationships between variables, the model logic and the rationale for modelling choices should all be documented and approved by the Model Oversight Committee.
(ii) The collection of subjective inputs must be treated as a formal data collection process. This means that the input data must be part of the DMF, with suitable quality control. Expert-based models provided by third parties must be supported by an appropriate involvement of internal subject matter experts.
(iii) Institutions are expected to develop several candidate models based on different assumptions. For all candidates, they should assess the uncertainty of the outputs, which will be a key driver of the model selection.
(iv) Institutions must be mindful of the high Model Risk associated with expert-based models. They must be in a position to justify that appropriate actions have been taken to manage such Model Risk. An additional degree of conservatism should be employed for the design, calibration and usage of expert-based models. The usage of such models for material portfolios could result in additional provisions and/or capital upon review by the CBUAE.
6.8 Model Selection
6.8.1
For statistical models, institutions must choose a final model amongst a pool of constructed models. Institutions must implement an explicit mechanism to filter out models and select a final model amongst several available options. It is recommended to select a main model and a challenger model up to the pre-implementation validation step. The selection of a model should include, at a minimum, the criteria outlined below. Institutions should consider all criteria together. Statistical performance should not be the only decisive factor to choose a model.
(i) The chosen model must demonstrate adequate performance, statistical stability and robustness as shown by the statistical indicators and their thresholds.
(ii) The chosen model must be based on appropriate causal relationships, i.e. it should be constructed with variables and relationships that meet economic intuition and make logical business sense, as per the definition section of the MMS. For that purpose, causal diagrams are encouraged.
(iii) The chosen model must also lead to outcomes that meet economic intuition, can be explained easily and can support decision-making appropriately.
(iv) The chosen model must be implementable.
6.8.2
For deterministic and expert-based models, institutions must choose a final model amongst the pool of constructed models based on various assumptions. Institutions must put in place an explicit mechanism to prioritise certain assumptions and therefore choose a model amongst several candidates. In particular, the selection process should incorporate the following criteria:
(i) The relationships between variables should be based on established causal links. The assumptions and limitations of these links should be assessed thoroughly.
(ii) The chosen model should lead to outcomes that meet economic intuition as defined in the MMS, can be explained easily and can support decision-making appropriately.
(iii) The chosen model should be implementable.
6.9 Model Calibration
6.9.1
Model calibration is necessary to ensure that models are suitable to support business and risk decisions. Institutions must ensure that model calibration is based on relevant data that represents appropriately the characteristics and the drivers of the portfolio subject to modelling. This also applies to decisions to override or adjust inputs, coefficients and/or variables. Calibration choices must be fully documented and their assessment must also form part of the validation process. Models should be re-calibrated when deemed necessary, based on explicit numerical indicators and pre-established limits.
6.9.2
The choice of calibration requires judgement and must be closely linked to the objective of each model. In particular, the time period employed for calibration must be carefully justified depending on model types. Pricing models should be accurate. Provision models should be accurate with a degree of conservatism and should reflect the current and future economic conditions. Capital models should be conservative and reflect long term trends. Stress testing models should focus on extreme economic conditions.
6.10 Pre-implementation Validation
6.10.1
The pre-implementation validation of a model is the first independent validation that takes place after model development. Such validation must ensure that the model is fit for purpose, meets economic intuition as defined in the MMS and generates results that are assessed as reasonable by expert judgement. The depth of such validation must be defined based on model materiality and follow the institution’s model management framework. Tier 1 models must be subject to comprehensive pre-implementation validation.
6.10.2
For the qualitative review, the pre-implementation validation must cover the elements presented in Article 10.3 pertaining to the scope of the independent post-implementation validation. For the quantitative review, the pre-implementation validation must assess the model accuracy, stability and sensitivity as explained in Article 10.4.3 also pertaining to the scope of the independent post-implementation validation.
6.10.3
Institutions must document the scope, limitations and assumptions of models as part of the pre-implementation validation.
6.11 Impact Analysis
6.11.1
The objective of the impact analysis is to quantify the impact of using a newly-developed model or a newly-recalibrated model on the production of financial estimates. Where applicable, the impact analysis should be documented as part of the model development phase and reported to the Model Oversight Committee.
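Such an impact analysis can be sketched by comparing the estimates produced by an incumbent and a newly developed model on the same portfolio; both provision models and the portfolio below are hypothetical placeholders.

```python
# Sketch of a model-change impact analysis: compare financial estimates
# from an incumbent and a new model on the same portfolio. Both models
# and the portfolio are hypothetical.
portfolio = [
    {"id": 1, "exposure": 100.0},
    {"id": 2, "exposure": 200.0},
]

def old_provision(p):
    return 0.020 * p["exposure"]  # incumbent model: flat 2.0% rate

def new_provision(p):
    return 0.025 * p["exposure"]  # new model: flat 2.5% rate

old_total = sum(old_provision(p) for p in portfolio)
new_total = sum(new_provision(p) for p in portfolio)

# Absolute and relative impact of switching models, for reporting
# to the Model Oversight Committee.
impact = {
    "absolute": new_total - old_total,
    "relative": (new_total - old_total) / old_total,
}
```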