Big Data Analytics and Artificial Intelligence (AI)
Materiality
- 3.92 Institutions should assess their Big Data Analytics and AI Applications to determine the materiality and associated risks of each Application.
- 3.93 When conducting a materiality assessment of a Big Data Analytics and AI Application, Institutions should consider:
- a. The purpose of the Big Data Analytics and AI Application (i.e. use case) and its role in the Institution’s decision-making process;
- b. The criticality and inherent risk profile of the activities (i.e. are they activities that are critical to the business continuity/viability of the Institution and its obligations to Customers); and
- c. The likelihood that the activity may be disrupted and the impact of any such disruption.
Governance
- 3.94 Institutions should establish an approved and documented governance framework for effective decision-making and proper management and control of risks arising from the use of Big Data Analytics and AI. The governance framework should:
- a. Establish a mechanism to ensure that the Institution assesses whether an Application is suitable for Big Data Analytics and AI implementation, and define specific parameters and criteria to support the Institution’s decision-making;
- b. Establish appropriate policies, procedures and controls to govern the design, development, monitoring, review and use of Big Data Analytics and AI Applications within the Institution;
- c. Ensure proper validation of Big Data Analytics and AI Applications prior to their launch, and thereafter implement on-going training, calibration and review to ensure the reliability, fairness, accuracy and relevance of the algorithms, models and Data used and the results;
- d. Maintain a transparent, enterprise-wide record of Big Data Analytics and AI Applications and their underlying mechanics;
- e. Establish processes to assess, monitor, report and mitigate risks associated with the Big Data Analytics and AI Application;
- f. Ensure that material decisions regarding Big Data Analytics and AI Applications and their underlying models and Data are documented and sufficiently justified; and
- g. Cover every stage of the model lifecycle including design, development, deployment, review, update and discontinuation.
- 3.95 The Governing Body and Senior Management of the Institution should be accountable for the outcomes and decisions arising from the use of Big Data Analytics and AI Applications, including those Applications that make decisions on behalf of the Institution. They should:
- a. Ensure that all Staff working on or using Big Data Analytics and AI Applications are assigned appropriate accountability for their involvement with Big Data Analytics and AI Applications and understand what they should do to meet this accountability; and
- b. Ensure that technical specialists with appropriate technology skillsets (e.g. Big Data analysts, Artificial Intelligence engineers and specialists) and Application-specific skillsets (e.g. credit risk modelling specialists if the Application is a credit scoring model) form part of the team actively involved in developing and implementing Big Data Analytics and AI Applications.
- 3.96 When Outsourcing to an Outsourcing Service Provider, Institutions should ensure that access to information is adequately controlled, monitored, reviewed and audited by the Institution’s internal control functions and by regulators or persons employed by them, including supervisory reviews by the respective Supervisory Authority.
- 3.97 Big Data Analytics and AI Applications, including when the model is developed by an Outsourcing Service Provider, should be auditable and, accordingly, Institutions, where relevant and considering the type of Application used, should maintain on-going and up-to-date information through:
- a. Establishing audit logs and maintaining traceability of decisions and outcomes of the Big Data Analytics and AI Application (an illustrative sketch of such a record follows this list);
- b. Developing and maintaining design documentation (further guidance provided in Clause 3.103);
- c. Maintaining records of the various versions of the model including its code (further guidance provided in Clause 3.108);
- d. Archiving original Datasets used to develop, re-train or calibrate models;
- e. Tracking outcomes and performance of the Big Data Analytics and AI Application; and
- f. Retaining the above information for a minimum period of five (5) years, or as otherwise prescribed by applicable laws and regulations.
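The traceability expected under Clause 3.97(a) can be kept as a simple, append-only record per decision. The following is a minimal, non-normative sketch; the field names, hashing scheme and JSON-lines storage are assumptions chosen for illustration, not requirements of this Guidance.

```python
# Illustrative sketch only: field names, hashing scheme and JSON-lines storage are
# assumptions for the example, not prescribed by this Guidance.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionAuditRecord:
    application_id: str             # identifier of the Big Data Analytics and AI Application
    model_version: str              # version of the model that produced the decision
    input_hash: str                 # hash of the input Data, so the exact inputs can be traced
    output: str                     # decision or score produced by the model
    decided_at: str                 # UTC timestamp of the decision
    reviewer: Optional[str] = None  # Staff member involved, where human review applies

def log_decision(record: DecisionAuditRecord, path: str = "decision_audit.jsonl") -> None:
    """Append one decision to an append-only audit log (to be retained for at least five years)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionAuditRecord(
    application_id="credit-scoring-model",
    model_version="2.3.1",
    input_hash=hashlib.sha256(b"<serialised input features>").hexdigest(),
    output="declined",
    decided_at=datetime.now(timezone.utc).isoformat(),
))
```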
Design
- 3.98 Institutions should ensure that the models for their Big Data Analytics and AI Applications are reliable, transparent and explainable, commensurate with the materiality of those Applications. Accordingly, Institutions, where appropriate, should consider:
- a. Reliability: Implementing measures to ensure material Big Data Analytics and AI Applications are reliable and accurate, behave predictably, and operate within the boundaries of applicable rules and regulations, including any laws on data protection or cyber security;
- b. Transparency: Being transparent in how they use Big Data Analytics and AI in their business processes and (where reasonably appropriate) how the Big Data Analytics and AI Applications function; and
- c. Technical Clarity: Implementing measures to ensure the technical processes and decisions of a Big Data Analytics and AI model can be easily interpreted and explained, to avoid the risk of “black-box” models. The level of technical clarity should be appropriate and commensurate with the purpose and materiality of the Big Data Analytics and AI Application (e.g. where the model results have significant implications on decision-making). An illustrative interpretability sketch follows this list.
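One way to provide the technical clarity described in Clause 3.98(c) is to measure how much each input feature drives the model’s output. The sketch below is illustrative only: it uses a synthetic dataset and scikit-learn’s permutation importance, and the model choice and feature labels are assumptions; a material Application would use whatever interpretability technique suits its model class.

```python
# Illustrative, non-normative sketch: permutation importance is one simple way to show
# which input features drive a model's decisions. Dataset and model are synthetic examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance degrades;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```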
- 3.99 Institutions should adopt an effective Data governance framework to ensure that Data used by the material Big Data Analytics and AI model is accurate, complete, consistent, secure, and provided in a timely manner for the Big Data Analytics and AI Application to function as designed. The framework should document the extent to which the Data meets the Institution’s requirements for data quality, gaps in data quality that may exist and steps the Institution will take, where possible, to resolve these gaps over time.
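A Data governance framework of the kind described in Clause 3.99 is often supported by automated quality checks on model inputs. The sketch below is a minimal, non-normative example of such checks; the column names, the consistency rule and the reporting format are assumptions for illustration.

```python
# Illustrative, non-normative sketch of basic Data quality checks: completeness,
# uniqueness and a simple consistency rule. Column names and rules are example assumptions.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise simple quality indicators for a model's input Data."""
    return {
        "row_count": len(df),
        "missing_ratio_per_column": df.isna().mean().to_dict(),         # completeness
        "duplicate_rows": int(df.duplicated().sum()),                   # uniqueness
        "negative_income_rows": int((df["monthly_income"] < 0).sum()),  # consistency rule
    }

df = pd.DataFrame({
    "customer_id":    [1, 2, 2, 3, 4],
    "monthly_income": [5000.0, 7200.0, 7200.0, None, -150.0],
})
print(data_quality_report(df))
```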
- 3.100 Institutions should make regular efforts to ensure Data used to train the material Big Data Analytics and AI model is representative (i.e. that the Data, and the inferences drawn from the Data, are relevant to the Big Data Analytics and AI Application) and produces predictable, reliable outcomes that meet objectives.
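Representativeness can be monitored by comparing the distribution of the training Data with the Data the model currently scores. The following non-normative sketch uses the Population Stability Index; the feature, the bin count and the 0.2 alert threshold are conventional but assumed choices, not requirements of this Guidance.

```python
# Illustrative, non-normative sketch: Population Stability Index (PSI) between the
# training-time distribution of a feature and the current production distribution.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time distribution and a current one (higher = more drift)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log/division by zero
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(10_000, 2_000, 5_000)  # Data the model was trained on
current_income = rng.normal(11_500, 2_500, 5_000)   # Data seen in production
psi = population_stability_index(training_income, current_income)
print(f"PSI = {psi:.3f} -> {'investigate representativeness' if psi > 0.2 else 'stable'}")
```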
- 3.101 Institutions should be able to promptly suspend material Big Data Analytics and AI Applications at the Institution’s discretion, such as in the event of a heightened cyber threat, an information security breach or a malfunction of the model.
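Prompt suspension of the kind described in Clause 3.101 can be supported by a simple switch that routes requests to a conventional process while the Application is suspended. The sketch below is illustrative only; the class and function names, and the fallback to manual review, are assumptions for the example.

```python
# Illustrative, non-normative sketch: a suspension flag that routes requests to a
# conventional (non-AI) process while the model is suspended. Names are assumptions.
import threading

class AiApplicationSwitch:
    """Allows authorised Staff to suspend the AI Application promptly."""
    def __init__(self) -> None:
        self._suspended = threading.Event()

    def suspend(self, reason: str) -> None:
        print(f"AI Application suspended: {reason}")
        self._suspended.set()

    def resume(self) -> None:
        self._suspended.clear()

    def decide(self, application_data: dict) -> str:
        if self._suspended.is_set():
            return manual_review(application_data)    # conventional fallback process
        return ai_model_decision(application_data)    # Big Data Analytics and AI model

def ai_model_decision(application_data: dict) -> str:
    return "approved"  # placeholder for the model's decision

def manual_review(application_data: dict) -> str:
    return "queued for manual review"

switch = AiApplicationSwitch()
print(switch.decide({"customer_id": 1}))
switch.suspend("information security breach under investigation")
print(switch.decide({"customer_id": 2}))
```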
- 3.102 Institutions should, where relevant, conduct rigorous, independent validation and testing of material trained Big Data Analytics and AI models to ensure the accuracy, appropriateness and reliability of the models prior to deployment. Institutions should ensure the model is reviewed to identify any unintuitive or false causal relationships. The validation may be carried out by an independent function within the Institution or by an external organisation.
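Independent validation typically includes scoring the trained model on held-out Data that was not used in development and comparing the result against pre-agreed acceptance criteria. The sketch below illustrates this on a synthetic dataset; the model, the AUC metric and the 0.70 threshold are assumptions for the example, not criteria set by this Guidance.

```python
# Illustrative, non-normative sketch: evaluating a trained model on held-out Data as one
# input to an independent validation. Dataset, model and threshold are example assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=8, random_state=1)
X_dev, X_holdout, y_dev, y_holdout = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# The independent validator scores the holdout set and compares against a pre-agreed bar.
auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
print(f"holdout AUC = {auc:.3f}")
if auc < 0.70:  # example acceptance threshold, to be set and justified by the Institution
    print("model does not meet the pre-deployment validation criterion")
```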
- 3.103 Institutions should maintain documentation outlining the design of the material Big Data Analytics and AI model (an illustrative documentation sketch follows this list) including, but not limited to, where applicable:
- a. The input Data source and Data description (types and use of Data);
- b. The Data quality checks and Data transformations conducted;
- c. Reasons and justifications for specific model design and development choices;
- d. Methodology or numerical analyses and calculations conducted;
- e. Results and expected outcomes;
- f. Quantitative evaluation and testing metrics used to determine soundness of the model and its results;
- g. Model usage and implementation;
- h. Form and frequency of model validation, monitoring and review; and
- i. Assumptions or limitations of the model with justifications.
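Design documentation of this kind can be captured as structured metadata stored alongside the model. The following is a hypothetical, non-normative example of such a record; every field name and value is an assumption for illustration only.

```python
# Illustrative, non-normative sketch of design documentation kept as structured metadata.
# All names and values below are hypothetical examples, not prescribed by this Guidance.
model_design_record = {
    "model_name": "retail-credit-scoring",
    "version": "2.3.1",
    "input_data": {
        "sources": ["core_banking.loan_applications"],
        "description": "application and bureau Data for retail credit Customers",
    },                                                                  # Clause 3.103(a)
    "data_quality_checks": ["completeness >= 99%", "no duplicate customer_id"],  # (b)
    "design_choices": "gradient boosting chosen over logistic regression; "
                      "accuracy/explainability trade-off documented and accepted",  # (c)
    "methodology": "supervised classification with 5-fold cross-validation",        # (d)
    "evaluation_metrics": ["holdout AUC", "false positive rate"],                   # (e)-(f)
    "usage": "decision support for credit officers; not fully automated",           # (g)
    "validation_schedule": "independent validation before deployment, then annually",  # (h)
    "assumptions_and_limitations": ["trained on 2019-2023 Data; performance under "
                                    "materially different conditions is untested"],    # (i)
}
print(model_design_record["model_name"], model_design_record["version"])
```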
- 3.104 Institutions should introduce controls to ensure the confidentiality and integrity of the code used in the material Big Data Analytics and AI Application so that the code is only accessed and altered by authorised persons.
- 3.105 Institutions should identify and monitor the unique risks arising from use of the material Big Data Analytics and AI Application and establish appropriate controls to mitigate those risks.
Management and Monitoring
- 3.106 Institutions should establish an approved and documented framework to review the reliability, fairness, accuracy and relevance of the algorithms, models and Data used prior to deployment of a material Big Data Analytics and AI Application and on a periodic basis after deployment, to verify that the models are behaving as designed and intended. The framework should cover, where relevant:
- a. The various types and frequencies of reviews, including continuous monitoring, re-training, calibration and validation;
- b. Scenarios and criteria that would trigger a re-training, calibration, re-development or discontinuation of the model, such as a significant change in input Data or external/economic changes;
- c. Review of material Big Data Analytics and AI model outcomes for fairness or unintentional bias (e.g. through monitoring and analysis of false positive and/or false negative rates; an illustrative sketch follows this list); and
- d. Review of continuity or contingency measures such as human intervention or the use of conventional processes (i.e. that do not use Big Data Analytics and AI).
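Monitoring of the kind described in item (c) can be as simple as computing false positive and false negative rates per Customer group and comparing them. The sketch below is illustrative only; the group labels, the sample outcomes and the meaning of a “positive” decision are assumptions for the example.

```python
# Illustrative, non-normative sketch: false positive and false negative rates per Customer
# group, as one way to surface unintentional bias. Data and labels are example assumptions.
import pandas as pd

outcomes = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 1, 1, 0, 1],  # 1 = flagged/declined by the model
    "actual":     [1, 0, 0, 0, 0, 1, 0, 0],  # 1 = outcome the model tries to predict
})

def error_rates(df: pd.DataFrame) -> pd.Series:
    fp = ((df.prediction == 1) & (df.actual == 0)).sum()
    fn = ((df.prediction == 0) & (df.actual == 1)).sum()
    negatives = (df.actual == 0).sum()
    positives = (df.actual == 1).sum()
    return pd.Series({
        "false_positive_rate": fp / negatives if negatives else float("nan"),
        "false_negative_rate": fn / positives if positives else float("nan"),
    })

# Materially different rates between groups would trigger further review and justification.
print(outcomes.groupby("group")[["prediction", "actual"]].apply(error_rates))
```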
- 3.107 When the use of a material Big Data Analytics and AI model results in a technical or model-related error or failure, Institutions should:
- a. Be able to swiftly detect the error;
- b. Establish a process to review the error and rectify it in a timely manner, which may include notifying another function; and
- c. Report the error to relevant stakeholders if material.
- 3.108 Institutions should establish a robust system for versioning and maintain a record of each version of the material Big Data Analytics and AI model (an illustrative version record follows this list) including, but not limited to, where applicable:
- a. New Data used;
- b. Revisions to the documentation;
- c. Revisions to the algorithm;
- d. Changes in the way variables are picked and used in the model or, where possible, the names of variables; and
- e. The expected outcome of the newly calibrated, re-trained or re-developed model.
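A version register entry can record the items above as structured metadata kept alongside the archived Data and code. The following is a hypothetical, non-normative example; all field names and values are assumptions for illustration.

```python
# Illustrative, non-normative sketch: one entry in a model version register.
# All names and values are hypothetical examples, not prescribed by this Guidance.
import json
from datetime import date

version_record = {
    "model_name": "retail-credit-scoring",
    "version": "2.4.0",
    "released_on": date.today().isoformat(),
    "new_data": "applications from 2024-Q1 onwards",                      # Clause 3.108(a)
    "documentation_revisions": "design document updated to v7",           # Clause 3.108(b)
    "algorithm_revisions": "gradient boosting re-tuned; learning rate lowered",  # Clause 3.108(c)
    "variable_changes": ["added utilisation_ratio", "removed postcode"],  # Clause 3.108(d)
    "expected_outcome": "comparable accuracy with fewer false positives on the holdout set",  # Clause 3.108(e)
    "code_commit": "<identifier of the source-control revision>",         # links to the code version
}
print(json.dumps(version_record, indent=2))
```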
Ethics
- 3.109 Institutions should ensure that their Big Data Analytics and AI Applications promote fair treatment, produce objective, consistent, ethical and fair outcomes, and are aligned with Institutions’ ethical standards, values and codes of conduct. Accordingly, they should:
- a. Comply with laws against discrimination and other applicable laws;
- b. Be produced using representative inputs and Data which have been tested for selection bias (further guidance provided in Clause 3.100);
- c. Consider whether a human-in-the-loop mechanism is needed to detect and mitigate biases;
- d. Retain the possibility of manual intervention to mitigate or reverse irresponsible and erroneous decisions;
- e. Retain the possibility of modification by the Institution; and
- f. Be explainable.
- 3.110 Institutions should consider the fairness of a Big Data Analytics and AI model through understanding the biases and noise affecting Big Data Analytics and AI decisions. Institutions should define what it means for a Big Data Analytics and AI model to be fair.
- 3.111 Institutions should consider and assess the impact that Big Data Analytics and AI models may have on individuals or groups of individuals to ensure that such individuals or groups are not systematically disadvantaged unless the decisions suggested by the models have a clearly documented justification. Institutions should take steps to minimize unintentional or undeclared bias.
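One common working definition of fairness compares the rate of favourable outcomes across groups of Customers. The sketch below is illustrative only; the sample decisions, group labels and the 0.8 review threshold (a convention borrowed from disparate-impact analysis) are assumptions, and the Institution should define and justify its own fairness criteria.

```python
# Illustrative, non-normative sketch: comparing favourable-outcome rates across groups
# as one possible fairness check. Data, labels and threshold are example assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],  # 1 = favourable outcome for the Customer
})

selection_rates = decisions.groupby("group")["approved"].mean()
ratio = selection_rates.min() / selection_rates.max()  # "disparate impact" style ratio

print(selection_rates)
print(f"min/max selection-rate ratio = {ratio:.2f}")
if ratio < 0.8:  # example threshold prompting a documented justification or remediation
    print("a group may be systematically disadvantaged; review and justify or remediate")
```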
Customer Protection
- 3.112 Institutions should be transparent with Customers about their use of Big Data Analytics and AI through their conduct and through accurate, understandable and accessible plain-language disclosure. Institutions should:
- a. Ensure that Customers are informed of products and/or services that utilise Big Data Analytics and AI and the associated risks and limitations of the technology, prior to providing the service or each time Customers interact with the service (e.g. in the case of a Customer-facing service);
- b. Explain how to use the Big Data Analytics and AI Application to Customers and ensure Customers always have easy access to the instructions; and
- c. Provide clear explanations of the types of Data, types of variables and decision-making process used by Big Data Analytics and AI Applications upon Customers’ request. For the avoidance of doubt, clear explanations do not require exposure of Institutions’ intellectual property, publishing of proprietary source code or details of firms’ internal processes.
- 3.113 Institutions should obtain each Customer’s acceptance of the risks associated with the use of Big Data Analytics and AI prior to providing the service.
- 3.114 Institutions should put in place a mechanism for Customers to raise inquiries about Big Data Analytics and AI Applications and request reviews of decisions made by Big Data Analytics and AI Applications.