IOSCO Consultation Report on artificial intelligence in capital markets: use cases, risks and challenges

June 2025

Since the publication of IOSCO’s last report on Artificial Intelligence (AI) in 2021, AI technologies have undergone significant developments, including the emergence of large language models (LLMs) and generative AI (GenAI) systems. These advances have expanded the range of AI applications in financial markets, bringing both potential benefits and potential risks.

IOSCO’s most recent work on AI, currently being developed by its Financial Technology Task Force (FTF), follows a two-phased approach:

The first phase, consisting of this report, aims to develop a common understanding among IOSCO members of the issues, risks and challenges presented by emerging AI technologies used in financial products and services, from the perspective of investor protection, market integrity and financial stability. It also identifies how some IOSCO members have started responding to recent developments. The report relies on research conducted in 2024 by the IOSCO FTF AI Task Force, through surveys, stakeholder outreach and literature reviews, to collect information on current and potential future uses of AI systems in financial products and services.

The second phase of IOSCO’s work on AI will be to consider, if appropriate, the development of additional tools, recommendations or considerations to assist its members in addressing the issues, risks and challenges posed by the use of AI in financial products and services. IOSCO will continue to play a coordinating role regarding AI developments in the financial sector and to engage with other relevant international organisations, such as the Financial Stability Board (FSB).

This document was open for public consultation until 11 April 2025, to receive comments on its content and on other potential future focus areas from financial market participants, AI developers, academics, researchers, policy experts and other stakeholders.

What AI use cases in capital markets have been identified?

AI technologies have become increasingly common in supporting decision-making processes, in applications and functions such as robo-advising, algorithmic trading, investment research and sentiment analysis. Regulated firms and third-party providers are also using AI technologies to enhance surveillance and compliance functions, particularly in systems related to anti-money laundering and counter-terrorist financing.

Firms are considering applying recent AI developments to support internal operations and processes by automating certain tasks, such as coding, information extraction, text classification, clustering, summarisation, transcription, translation and drafting. They are also enhancing communication with clients through chatbots. For GenAI in particular, capital market participants seem to have prioritised internal, lower-risk implementations focused on enhancing productivity, generating insights or improving risk management, rather than customer-facing applications.

Based on the results of surveys conducted by IOSCO, the most frequent AI uses among market participants, including broker-dealers, asset managers, and exchanges, were the following:

  • Anti-money laundering and counter-terrorist financing (50%)
  • Internal productivity support (50%)
  • Market analysis and trading insights (40%)

As for the European Union, ESMA studied the use of AI in its markets in February 2023 and found varying metrics on the proportion of its registrants using AI. Among the most prominent use cases is the application of natural language processing for investment research. Based on the regulatory and marketing documentation of EU-domiciled mutual funds, ESMA identified 54 entities (less than 0.2% of EU mutual funds) promoting their use of AI. ESMA also detected a growing relevance of third-party AI system providers, the use of AI in trading, and a large proportion of credit rating agencies using AI and natural language processing as part of their research, writing or internal processes.¹ In a survey conducted in October 2023, ESMA concluded that most credit rating agencies and market infrastructures were either already using GenAI or (most frequently) were planning to start using it in the near future.²

¹ European Securities and Markets Authority (2023), Artificial Intelligence in EU Securities Markets

² European Securities and Markets Authority (2024), TRV Risk Monitor

What are the risks, issues and challenges relating to investor protection, market integrity and financial stability?

The most frequently cited risks regarding the use of AI in the financial sector include the following:

  • Malicious uses of AI:

    The risks most frequently cited by respondents were cybersecurity, data privacy and protection, fraud, market manipulation and deepfakes.

    Cybersecurity risks, particularly those associated with AI developments, have been categorised as follows: (i) attacks using AI, (ii) attacks targeting AI systems and (iii) AI design and implementation failures.

    Should the malicious use of AI techniques become widespread, investor confidence in the source and veracity of digital information and communications would be damaged, to the point of posing wider risks, such as a decline in financial markets’ trustworthiness. In addition, reliance on AI technologies in general could decline, hindering the development of such technologies and their use for beneficial purposes.

  • AI models and data considerations:

    As for AI models, the most frequently identified risks were those related to (i) explainability and interpretability, (ii) model bias, (iii) complexity, (iv) robustness and resilience, (v) hallucinations, and (vi) conflicts of interest.

    As for data, the most frequently mentioned risks were those related to data quality (whether data is representative, accurate and relevant), data drift (where the training data, and hence a model trained on it, becomes unrepresentative of live conditions; see the sketch after this list), and data bias (where datasets are not sufficiently diverse or representative, and may contain unfair biases).

  • Concentration, outsourcing, and third-party dependency:

    Concentration of data aggregators poses specific risks where AI developments related to algorithmic trading, robo-advising and asset management are used. Detecting and monitoring concentration, outsourcing and third-party dependencies entail significant challenges. For example, IOSCO was unable to reach a clear understanding of the range of AI model types being used across financial services, including the role of proprietary versus open models.

  • Interactions between humans and AI systems:

    These risks include lack of accountability and regulatory non-compliance, insufficient oversight and talent scarcity, and over-reliance on technology for decision-making.
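
To illustrate the data drift risk noted above, the following minimal Python sketch flags when a model’s live inputs no longer resemble its training data. It assumes that drift in a single numeric feature can be screened with a two-sample Kolmogorov-Smirnov test; the feature values, the alpha threshold and the helper name drift_detected are illustrative assumptions, not taken from the IOSCO report.

    # Hedged sketch: screen one numeric feature for data drift with a
    # two-sample Kolmogorov-Smirnov test (scipy). Values, threshold and
    # names are illustrative assumptions, not from the IOSCO report.
    import numpy as np
    from scipy.stats import ks_2samp

    def drift_detected(train_values: np.ndarray, live_values: np.ndarray,
                       alpha: float = 0.01) -> bool:
        """Return True when live data differs significantly from training data."""
        _statistic, p_value = ks_2samp(train_values, live_values)
        return p_value < alpha  # small p-value: distributions likely diverged

    # Illustrative usage: a hypothetical market feature whose distribution shifts.
    rng = np.random.default_rng(seed=0)
    train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-period values
    live = rng.normal(loc=0.4, scale=1.2, size=1_000)   # shifted live values
    print(drift_detected(train, live))  # True: the model may need retraining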

In addition, IOSCO has identified, as matters subject to monitoring, the interconnectedness between financial institutions and their service providers, herding among market participants, and “collusive” or “scheming” behaviours, all of which could have a negative impact on markets, although there is insufficient data to assess them.

What measures have market participants applied for risk management and what regulatory solutions have been adopted?

Industry practices in risk management and governance are evolving. Some financial institutions are incorporating AI into existing governance and risk management structures, while others are establishing tailored AI governance and risk management frameworks.

Firms reported several relevant features of AI development that have influenced their considerations regarding risk management and governance:

  1. The embedding of AI technology into systems that employees across the organisation, not just IT staff and data scientists, can access or use on all their devices. Given this trend, some firms have acknowledged the need to build controls around AI use holistically across the organisation. They must therefore consider how to educate and train a much broader population of employees on topics such as data protection and privacy, computer hygiene, and the regulatory obligations implicated by various uses of AI technology in regulated activity.

  2. With the introduction of GenAI, LLMs and General Purpose AI, risk management and governance may not neatly fit within one organisational line. The risk profile associated with various AI use cases can encompass responsibilities across multiple organisational lines within a firm. As firms seek to integrate more recent AI technologies, they have acknowledged the need to staff cross-disciplinary risk management and governance teams, with employees from different areas and with sufficient seniority and expertise.

    Firms have also acknowledged that proper governance entails the appropriate ‘tone from the top’ and have included senior staff in the risk management and governance process, often reserving a senior position for a ‘Chief AI Officer’.

  3. Data criticality and cybersecurity. Risk management and governance of AI use require attention not only to the models used, but also to the data used to train them, so that data quality and provenance can be assessed. Protecting firm and customer data requires a focus on cybersecurity issues arising from the models and the data, as well as from the environment in which the AI model is deployed.

  4. The non-deterministic nature of certain AI technologies, which rely on probabilistic algorithms, makes AI models potentially unpredictable and difficult to explain. As firms explore the use of such AI technologies, they may focus on whether they can mitigate harm by preventing certain negative outcomes, rather than on complying with a specific set of requirements. This analysis usually examines the potential impacts of a particular use case, given its capabilities, and determines whether those impacts fall within an acceptable range or can be properly mitigated.
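
As a minimal sketch of the non-determinism described in point 4, the toy Python example below samples an output token from a softmax distribution: with the temperature set to zero the result is repeatable, while any positive temperature makes repeated runs diverge. The vocabulary, scores and function name are invented for illustration and do not come from the report.

    # Hedged sketch: toy temperature sampling showing why probabilistic
    # models are non-deterministic. Vocabulary and scores are invented.
    import numpy as np

    VOCAB = ["buy", "hold", "sell", "review"]
    LOGITS = np.array([2.0, 1.5, 1.0, 0.5])  # hypothetical model scores

    def next_token(temperature: float, rng: np.random.Generator) -> str:
        if temperature == 0.0:                 # greedy decoding: deterministic
            return VOCAB[int(np.argmax(LOGITS))]
        scaled = LOGITS / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return VOCAB[rng.choice(len(VOCAB), p=probs)]

    rng = np.random.default_rng()
    print([next_token(0.0, rng) for _ in range(5)])  # identical every run
    print([next_token(1.0, rng) for _ in range(5)])  # varies from run to run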

In general, larger financial firms seem to be applying risk management and governance frameworks to their internal AI strategy, processes, risk management and technology adoption, incorporating some or all of the following principles: (i) transparency; (ii) reliability, robustness and resilience; (iii) investor and market protection; (iv) fairness; (v) security, safety and privacy; (vi) accountability; (vii) risk management and governance; and (viii) human oversight.

Regulatory responses to the use of AI in the financial sector include (i) applying existing regulatory frameworks (“technology-neutral” approach), (ii) issuing specific guidance, and (iii) developing new regulatory frameworks.

As for the EU, the report mentions the following:

  • Specific ESMA guidelines, from May 2024, for firms using AI when providing investment services to retail clients. These include examples of when the use of AI technologies by investment firms would be covered by MiFID II requirements, such as customer support, fraud detection, risk management, compliance, and support to firms in the provision of investment advice and portfolio management.

  • Bespoke framework under the EU Artificial Intelligence Act. Regarding the financial sector, the EU AI Act identifies two high-risk use cases: (i) AI systems used to evaluate the creditworthiness of natural persons, and (ii) AI systems used for risk assessment and pricing in relation to life and health insurance.