The Artificial Intelligence (AI) market is booming, the proliferation of AI startups has been astounding, and the scale of investment flowing into AI ventures is without precedent. What about AI in investment banking? In this article, we'll cover a range of dynamics you need to consider if you aim to leverage AI in investment banking.
The paradigm shift of AI in investment banking
According to Statista, funding for AI startups grew steadily in the years preceding the COVID-19 pandemic, rising from $18 billion in 2017 to $26 billion in 2020. Investment then accelerated sharply, particularly in the latter half of 2021, as the new necessity of remote work and heightened cybersecurity concerns became paramount. Funding soared from slightly over $30 billion in 2020 to over $65 billion in 2021, and by 2022 global corporate investment in AI had reached a staggering $92 billion, up from just $12.75 billion in 2015. Taken together, these figures capture the meteoric growth of AI over the past decade.
A Crunchbase report revealed that global funding for AI startups surged to nearly $50 billion in 2023, a 9% year-over-year increase, and the forecast for 2024 suggests this upward trajectory will continue. Not surprisingly, the largest investments went to leading foundation model companies such as OpenAI, Anthropic, and Inflection AI, which collectively raised $18 billion.
In summary, these numbers not only showcase the rapid evolution of AI; they also signal robust adoption of the technology by users. The investment banking sector is certainly not lagging behind in embracing this trend.
AI is boosting processes in investment banking
In July 2023, EY's CEO Outlook Pulse survey revealed that almost half of CEOs are investing in AI, with 43% fully integrating AI into their capital allocation processes. This is a clear sign that industry leaders grasp the transformational impact of AI on traditional processes such as deal negotiations and M&A due diligence. They are becoming keenly aware that generative AI offers potentially significant efficiency gains in complex financial transactions and engagements.
In due diligence, for example, Robotic Process Automation (RPA) is revolutionizing the process: instead of relying exclusively on manual reviews, firms use AI to streamline data analysis, saving both time and cost. The benefits of AI extend to target identification and valuation, with generative AI tools analyzing diverse data sets to identify patterns that aid in target selection. Additionally, purpose-built AI tools apply predictive modeling techniques to simplify valuation by analyzing historical financial data, market trends, and macroeconomic factors, providing valuable insights for more effective negotiations and helping firms avoid overvaluation or undervaluation. AI thus plays a pivotal role in enhancing efficiency and streamlining many aspects of investment banking.
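To make the predictive-valuation idea concrete, here is a minimal, purely illustrative sketch of how a model might relate historical deal features to a valuation multiple. Everything in it is a hypothetical assumption for illustration — the features, the synthetic data, and the simple linear model — and production AI valuation tools use far richer data and far more sophisticated models.

```python
# Illustrative toy only: a predictive valuation model fit on synthetic
# "historical deal" data. All feature names and coefficients are
# hypothetical assumptions, not a real valuation methodology.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic history: revenue growth (%), EBITDA margin (%), and a macro
# sentiment index, mapped to an observed EV/EBITDA multiple with noise.
n = 200
growth = rng.uniform(0, 30, n)
margin = rng.uniform(5, 40, n)
macro = rng.normal(0, 1, n)
multiple = 4.0 + 0.15 * growth + 0.10 * margin + 0.8 * macro \
    + rng.normal(0, 0.5, n)

# Fit a linear model by ordinary least squares (intercept + 3 features).
X = np.column_stack([np.ones(n), growth, margin, macro])
coef, *_ = np.linalg.lstsq(X, multiple, rcond=None)

# Estimate a multiple for a hypothetical target: 18% growth, 25% margin,
# neutral macro conditions.
target = np.array([1.0, 18.0, 25.0, 0.0])
est = float(target @ coef)
print(f"Estimated EV/EBITDA multiple: {est:.1f}x")
```

The point of the sketch is the workflow, not the numbers: historical observations in, a fitted relationship out, and an estimate for a new target, which an analyst would then sanity-check against comparables rather than accept at face value.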
Alongside this paradigm-shifting development, however, a new dimension in the regulatory space has opened up, revealing a unique and unprecedented landscape of risk.
Understanding AI risks through FINRA's lens
The adoption of AI in investment banking is a fact. While AI brings considerable benefits to deal-making, this rapidly evolving technology raises crucial considerations with regard to risk management, data privacy, and security. The absence of organic human judgment and intuition in AI can pose serious challenges, particularly in the relational realm of deal risk assessment. Significantly, dependence on AI in investment banking might unintentionally result in non-compliance with regulatory frameworks. Key aspects of generative AI still require further development before investment bankers can rely on it completely. Even now, U.S. regulators at the SEC and FINRA have begun crafting AI-specific regulations, with the EU's recently adopted AI Act providing a template from which to proceed.
Regulatory obligations pertaining to AI, particularly generative AI, are taking shape. FINRA, for example, stipulates engaging in pilots and deployments to address risk and compliance processes, along with expediting information delivery to the market. Firms moving forward, perhaps with unseemly haste, in their use of AI technologies will need to keep abreast of increasing scrutiny from the SEC. Add to this the Biden Administration's Q4 2023 Executive Order "On the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which marks a significant step in the U.S. government's approach to regulating AI technologies.
This approach is now instantiated in the 2024 FINRA Annual Regulatory Oversight Report, which delves into the challenges and potential risks linked to AI in investment banking and the broader securities industry. Key concerns outlined in the report include:
1 - Model explainability and bias:
The report emphasizes that AI-based applications may introduce unique challenges related to model explainability and bias. Firms are urged to review supervisory procedures to prevent the creation of an environment conducive to excessive risk-taking.
2 - Autonomous trading applications:
Notably, the report highlights challenges associated with AI in portfolio management and trading, particularly when applications are designed to act autonomously. Unforeseen circumstances, such as market volatility, natural disasters, pandemics, or geopolitical changes, may render AI models unreliable, potentially resulting in undesirable trading behavior and negative consequences.
3 - Regulatory compliance and risk management:
Firms are cautioned to assess potential risks tied to AI applications in areas like liquidity and cash management, credit risk management, and regulatory compliance. The report stresses the necessity of robust governance, supervision, and extensive testing of AI-based applications to promptly identify and mitigate concerns.
These insights underscore the critical importance of addressing the unique risks and challenges associated with integrating AI technology into the securities industry. Robust governance, supervision, and risk management practices are imperative for ensuring the responsible and effective use of AI-based applications.
2023 was startling enough, with the exponential uptake of AI driven in large part by the public's embrace of OpenAI's ChatGPT. But 2024 promises to be a watershed year in which this breakthrough technology truly comes into its own and permeates every industry, including investment banking and the capital markets.