3.7. Regulatory considerations, fragmentation and potential incompatibility with existing regulatory requirements

Although many countries have dedicated AI strategies (OECD, 2019[5]), only a small number of jurisdictions currently have requirements that specifically target AI-based algorithms and models. In most cases, regulation and supervision of ML applications rest on overarching requirements for systems and controls (IOSCO, 2020[78]). These consist primarily of rigorous testing of algorithms before they are deployed in the market and continuous monitoring of their performance throughout their lifecycle.
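To make the testing-and-monitoring expectation more concrete, the sketch below shows one way a firm might track a deployed model's performance against its pre-deployment benchmark. It is a minimal illustration in Python; the metric, window size and tolerance threshold are assumptions rather than values prescribed by any regulator.

```python
# Minimal sketch of lifecycle monitoring for a deployed model (illustrative only;
# the threshold, window and accuracy metric are assumed, not regulatory standards).
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PerformanceMonitor:
    baseline_accuracy: float          # accuracy measured during pre-deployment testing
    tolerance: float = 0.05           # maximum acceptable degradation (assumed value)
    window: int = 100                 # number of recent predictions to evaluate
    outcomes: list = field(default_factory=list)

    def record(self, prediction, actual) -> None:
        """Store whether the latest prediction matched the observed outcome."""
        self.outcomes.append(prediction == actual)
        if len(self.outcomes) > self.window:
            self.outcomes.pop(0)

    def degraded(self) -> bool:
        """Flag the model for review if rolling accuracy falls below the benchmark."""
        if len(self.outcomes) < self.window:
            return False
        return mean(self.outcomes) < self.baseline_accuracy - self.tolerance

# Example: accuracy validated at 0.90 before deployment; live predictions are
# recorded and the model is escalated for review if performance drifts.
monitor = PerformanceMonitor(baseline_accuracy=0.90)
monitor.record(prediction=1, actual=1)
if monitor.degraded():
    print("Model performance has drifted below the tested benchmark; escalate for review.")
```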

The technology-neutral approach applied by most jurisdictions to regulate financial market products (in relation to risk management, governance and controls over the use of algorithms) may be challenged by the rising complexity of some innovative use-cases in finance. Given the depth of technological advances in AI areas such as deep learning, existing financial sector regulatory regimes could fall short in addressing the systemic risks posed by a potential broad adoption of such techniques in finance (Gensler and Bailey, 2020[102]).

Moreover, some advanced AI techniques may not be compatible with existing legal or regulatory requirements. The lack of transparency and explainability of some ML models, and the dynamic nature of continuously adapting deep learning models, are prime examples of such potential incompatibility. Inconsistencies may also arise in areas such as data collection and management: the EU GDPR framework for data protection imposes time limits on the storage of personal data, while AI-related rules could require firms to keep records of the datasets used to train algorithms for audit purposes. Given the sheer size of such datasets, there are also practical implications and costs involved in recording the data used to train models for supervisory purposes.
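One way firms might navigate the tension between storage limits and audit record-keeping, sketched below purely as an assumption and not as a compliance recipe, is to retain a tamper-evident fingerprint of the training dataset (hashes and summary metadata) rather than the raw personal data itself, preserving provenance for auditors while the underlying records are deleted within the required period.

```python
# Illustrative sketch (an assumption, not legal or supervisory guidance): keep a
# hash-based fingerprint of a training dataset for audit trails instead of the
# raw personal data, so provenance can be evidenced after the data is deleted.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_training_set(records: list[dict], model_id: str) -> dict:
    """Hash each training record and the dataset as a whole, keeping only metadata."""
    record_hashes = [
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    ]
    dataset_hash = hashlib.sha256("".join(record_hashes).encode()).hexdigest()
    return {
        "model_id": model_id,
        "n_records": len(records),
        "dataset_sha256": dataset_hash,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: the audit log stores only the fingerprint and counts; the personal data
# itself can still be erased within the retention limits of data-protection rules.
audit_entry = fingerprint_training_set(
    records=[{"age": 42, "income": 55000, "label": 1}],
    model_id="credit-scoring-v3",  # hypothetical identifier
)
print(audit_entry)
```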

Some jurisdictions, such as the EU, have identified a possible need to adjust or clarify existing legislation in certain areas (e.g. liability) in order to ensure effective application and enforcement (European Commission, 2020[84]). This stems from the opacity of AI systems, which makes it difficult to identify and prove possible breaches of the law, including provisions that protect fundamental rights, attribute liability and set the conditions for claiming compensation. In the medium term, regulators and supervisors may need to adjust regulations and supervisory methods to adapt to new realities introduced by the deployment of AI (e.g. concentration, outsourcing) (ACPR, 2018[33]).

Industry participants note a potential risk of fragmentation of the regulatory landscape with respect to AI at the national, international and sectoral levels, and the need for greater consistency to ensure that these techniques can function across borders (Bank of England and FCA, 2020[88]). In addition to existing regulation applicable to AI models and systems, a multitude of AI principles, guidance documents and best practices have been published in recent years. While the industry sees these as valuable in addressing potential risks, views differ over their practical usefulness and over the difficulty of translating such principles into effective practical guidance (e.g. through real-life examples) (Bank of England and FCA, 2020[88]).

The ease of use of standardised, off-the-shelf AI tools may encourage non-regulated entities to provide investment advisory or other services without proper certification or licensing, in a non-compliant way. Such regulatory arbitrage is also observed mainly among BigTech entities, which make use of datasets they have access to through their primary business activity.
