
How is AI affecting parts of the financial markets?

AI techniques are applied in asset management and in buy-side activity for asset allocation and stock selection, based on ML models’ ability to identify signals and capture underlying relationships in big data, as well as for the optimisation of operational workflows and risk management. The use of such techniques may, however, remain limited to larger asset managers and institutional investors that have the capacity and resources to invest in these technologies.

When used in trading, AI adds a layer of complexity to conventional algorithmic trading, as the algorithms learn from data inputs and evolve dynamically into models able to identify and execute trades without any human intervention. In highly digitised markets, such as equities and FX, AI algorithms can enhance liquidity management and the execution of large orders with minimal market impact, by dynamically optimising order size, duration and timing based on market conditions. Traders can also deploy AI for risk management and order flow management to streamline execution and produce efficiencies.
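The order-execution idea above can be illustrated with a minimal sketch of dynamic order slicing: a large parent order is split into child orders sized to the liquidity observed in each interval, so the order’s footprint adapts to market conditions. All names and parameters here are hypothetical, and real execution algorithms are far more sophisticated than this fixed participation-rate rule.

```python
# Illustrative sketch only: slice a large parent order into child orders
# whose size tracks a fraction of the market volume seen in each interval.
# The participation rate and data are made up for the example.

def slice_order(parent_qty, observed_volumes, participation_rate=0.1):
    """Return child order sizes capped at a share of observed volume,
    plus any quantity left unexecuted when the intervals run out."""
    child_orders = []
    remaining = parent_qty
    for vol in observed_volumes:
        if remaining <= 0:
            break
        # trade at most `participation_rate` of this interval's volume
        child = min(remaining, max(1, int(vol * participation_rate)))
        child_orders.append(child)
        remaining -= child
    return child_orders, remaining

children, left = slice_order(10_000, [20_000, 5_000, 40_000, 30_000])
```

In this toy run the child orders shrink in the thin 5,000-volume interval and grow again when liquidity returns, which is the adaptive behaviour the paragraph describes; an ML-driven execution engine would additionally forecast volume and adjust the participation rate itself.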

As with non-AI models and algorithms, the use of the same ML models by a large number of finance practitioners could prompt herding behaviour and one-way markets, which in turn may raise risks for liquidity and the stability of the system, particularly in times of stress. Although AI-driven algorithmic trading can increase liquidity in normal times, it can also lead to convergence of trading strategies and, by consequence, to bouts of illiquidity and flash crashes in times of stress. Market volatility could increase through large sales or purchases executed simultaneously, giving rise to new sources of vulnerability. Convergence of trading strategies creates the risk of self-reinforcing feedback loops that can, in turn, trigger sharp price moves. Such convergence also increases the risk of cyber-attacks, as it becomes easier for cyber-criminals to influence agents that are all acting in the same way. These risks exist in all kinds of algorithmic trading, but the use of AI amplifies them, given the models’ ability to learn and adjust to evolving conditions in a fully autonomous way. For example, AI models can identify the impact of herding, adjust their behaviour and learn to front-run on the earliest of signals. The complexity and difficulty of explaining and reproducing the decision mechanisms of AI algorithms and models make these risks challenging to mitigate.
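The self-reinforcing feedback loop described above can be sketched with a deliberately stylised simulation: many identical momentum traders react to the same signal, their combined orders move the price, and the price move regenerates the signal for the next step, producing a one-way market. Every parameter is illustrative and not calibrated to any real market.

```python
# Toy simulation of strategy convergence: N identical momentum agents
# all trade on the same signal, and their aggregate order flow has
# price impact, which feeds back into the next period's signal.
# Parameters are illustrative, not calibrated to real markets.

def simulate(steps=20, n_agents=100, impact=0.001):
    price, prev_price = 100.0, 101.0   # start just after a downward tick
    path = [price]
    for _ in range(steps):
        signal = price - prev_price
        # every agent applies the same rule: buy momentum, sell reversals
        direction = 1 if signal > 0 else -1 if signal < 0 else 0
        orders = n_agents * direction
        prev_price = price
        price += impact * orders        # aggregate price impact
        path.append(price)
    return path

path = simulate()
```

Because all agents act identically, the initial downward tick is never absorbed: each period’s selling regenerates the negative signal and the price ratchets down step after step, a cartoon version of the one-way market and flash-crash dynamics discussed in the text.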

AI techniques could also exacerbate illegal practices aimed at manipulating markets, and make such practices harder for supervisors to identify where collusion among machines is in place. This is enabled by the dynamic adaptive capacity of self-learning and deep learning AI models: they can recognise mutual interdependencies and adapt to the behaviour and actions of other market participants or other AI models, potentially reaching a collusive outcome without any human intervention, and perhaps without the user even being aware of it.

Figure 2. Impact of AI on business models and activity in the financial sector

 

AI models in lending could reduce the cost of credit underwriting and facilitate the extension of credit to ‘thin file’ clients, potentially promoting financial inclusion. The use of AI can create efficiencies in data processing for the assessment of prospective borrowers’ creditworthiness, enhance the underwriting decision-making process and improve lending portfolio management. It can also allow for the provision of credit ratings to ‘unscored’ clients with limited credit history, supporting the financing of the real economy (e.g. SMEs) and potentially promoting financial inclusion of underbanked populations.

Despite this vast potential, the use of AI-based models trained on inadequate data (e.g. data correlated with gender or race) in lending can create disparate impacts in credit outcomes and the potential for biased, discriminatory or unfair lending. In addition to inadvertently generating or perpetuating biases, AI-driven models can make discrimination in credit allocation harder to detect, and model outputs difficult to interpret and communicate to declined prospective borrowers. These challenges are exacerbated when credit is extended by BigTech firms that leverage their access to vast sets of customer data, raising questions about possible anti-competitive behaviour and market concentration in the technology layer of the service provision (e.g. cloud).
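One common, simple way to monitor for the disparate impact described above is to compare approval rates across demographic groups, for instance with the ‘four-fifths’ (80%) rule used in US fair-lending practice. The sketch below uses made-up decision data; real fairness monitoring would use richer metrics and statistical testing.

```python
# Illustrative disparate-impact check on model decisions (1 = approved,
# 0 = declined). Data is fabricated for the example; the 0.8 threshold
# reflects the common "four-fifths rule" heuristic.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one.
    Values below 0.8 are a common red flag for disparate impact."""
    lo, hi = sorted((approval_rate(group_a), approval_rate(group_b)))
    return lo / hi

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved
ratio = disparate_impact_ratio(group_a, group_b)  # 0.5 -> flagged
```

A ratio well below 0.8, as in this fabricated example, would prompt further investigation of the model and its training data; it does not by itself prove discrimination, but it makes the kind of opaque bias discussed above visible to model owners and supervisors.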

The use of AI techniques in blockchain-based finance could enhance the potential efficiency gains of DLT-based systems and augment the capabilities of smart contracts. AI can increase the autonomy of smart contracts, allowing the underlying code to be adjusted dynamically according to market conditions. The use of AI in DLT systems also introduces, if not amplifies, challenges encountered in AI-based traditional financial products, such as the lack of interpretability of AI decision-making mechanisms and the difficulty of supervising networks and systems built on opaque AI models. At the moment, AI is mostly used for risk management of smart contracts and the identification of flaws in the code. It should be noted, however, that smart contracts existed long before the advent of AI applications and rely on simple software code; as of today, most smart contracts used in a material way have no ties to AI techniques, and many of the suggested benefits of AI in DLT systems remain theoretical at this stage.

In the future, AI could support decentralised applications in decentralised finance (‘DeFi’), by enabling automated credit scoring based on users’ online data, investment advisory services and trading based on financial data, or insurance underwriting. In theory, AI-based smart contracts that are self-learning and adjust dynamically without human intervention could result in the building of fully autonomous chains. The use of AI could promote further disintermediation by replacing off-chain third-party providers of information with AI inference directly on-chain. It should be noted, however, that AI-based systems do not necessarily resolve the garbage in, garbage out conundrum and the problem of poor-quality or inadequate data inputs observed in blockchain-based systems. This, in turn, gives rise to significant risks for investors, market integrity and the stability of the system, depending on the size of the DeFi market. Equally, AI could amplify the numerous risks experienced in DeFi markets, adding complexity to already hard-to-supervise autonomous DeFi networks that lack single regulatory access points or governance frameworks allowing for accountability and compliance with oversight frameworks.

 