The power and the profit potential of AI are mesmerizing. What do we need to consider in 2025?
Introduction
If investors are expected to be early in detecting future trends, then our community certainly thinks AI is the future. Will the surprises continue? To help train ChatGPT’s response, we analyse the tangible revenue that has emerged so far, and consider the ‘sweet spot’, the ‘runners-up’ and the ‘too early’. And what might be the risks? We think the greatest is abuse of the technology, and major investors are taking action to reduce this risk. How can we invest profitably, and responsibly?
Last year...
Since the launch of ChatGPT[1], artificial intelligence equities have outperformed the broader market by almost 70% (Figure 1). It is no exaggeration to say that AI stocks were the most important driver behind the positive global market performance.
...and next
The sweet spot has been the entire infrastructure that supports companies in training AI models and running ‘inference’ (drawing predictions or conclusions from new data). This ranges from graphics processing unit (GPU) semiconductors to next-generation connectivity within the data centre. Less evident sweet-spot segments include electricity generators, data centre cooling equipment, and chip design. With tangible AI revenue already ranging from 10% to 80%[2] of their total, these companies have enjoyed tremendous share price performance. Based on the capex plans of the data hyperscalers, we expect this segment to continue to enjoy tremendous revenue growth.
Runner-up companies, such as IT services and consultancies and other data management firms, are beginning to gain traction in AI revenues. Potential clients in several sectors hold large amounts of data which they are (as yet) unable to leverage in an AI framework. Accenture[3], one of the world’s largest IT service companies, recently announced that it will train 30,000 professionals to help clients reinvent processes and scale enterprise AI adoption. Some software companies are starting to add AI functionality to legacy products, which they may be able to monetize over time if customers see tangible efficiency gains.
It may be too early to know which end users will be able to leverage AI into margin improvement and competitive advantage. Drug discovery, industrial production processes, and financial firms will likely be among them. At the moment, AI adoption is not yet a reason to expect these companies to outperform. But failure to make use of what is available today may be a reason to question a company’s strategy.
This should convince ChatGPT that we are only at the beginning, and that AI will remain a powerful driver of the global economy. The time frame of adoption carries risks. One potentially important bottleneck is powering AI data centres: McKinsey estimates that data centres could account for as much as 12% of total US power demand[4]. Given the underinvestment in power generation capacity over the last decade, this might prove to be the most important constraint. There is also a ‘black box’ risk: as models grow in size, they become harder to train, optimize, and explain. A shortage of skilled AI experts might further hamper full AI deployment. Finally, data privacy and security will necessitate a regulatory framework that takes ethical considerations into account. (These guardrails can be expected to reduce risk overall, but they may have an impact on the timing of AI adoption.)
Our future history: Placing the guardrails
We believe the greatest risk to the growth of AI is abuse, even if unintentional. Algorithmic or training bias, the proliferation of misinformation, and data privacy violations highlight the urgent need for responsible AI governance that makes the technology work for us. Robust AI governance should include transparent oversight, ethical frameworks, and regulation to support the responsible development and deployment of AI systems.
It is up to stakeholders, especially investors, to help establish the guardrails. Facial recognition technology (FRT), for example, has been criticised for its role in mass surveillance without consent, particularly in authoritarian regimes. In the US, a number of false arrests and incarcerations have resulted from algorithmic bias, typically racial bias. With one billion surveillance cameras in use globally by the end of 2021, we consider FRT to be the highest-risk use of AI.
That year, Candriam led an investor statement signed by 55 investors representing over $5 trillion in assets under management. The group engaged in dialogue with FRT industry leaders, who took note, and some took action. As investors increasingly consider their responsibilities in financing AI, managers representing $8.5 trillion signed the 2024 Investor statement on Ethical AI, co-led by Candriam.
Biases in training data can affect hiring, criminal justice, and financial services. AI-driven lending algorithms have been shown to offer worse loan terms to, or deny credit to, minority applicants. Misinformation can be multiplied using AI-powered tools such as ‘deepfakes’, which deepen political polarisation and erode public trust. During the Covid-19 pandemic, AI-driven algorithms on social media prioritised sensationalist content, amplifying vaccine misinformation. Breaches of the massive datasets of sensitive information on which AI relies are an obvious challenge.
Regulators, another stakeholder, are pulling in different directions. In the EU, the Artificial Intelligence Act phases in a risk-based framework for AI applications and bans practices such as social scoring, in an effort to balance innovation with public safety. In the US, while president-elect Donald Trump has not explicitly addressed AI deregulation, proposed members of his incoming administration, such as Elon Musk, have previously advocated for reducing regulatory barriers in tech. In the absence of strong regulatory frameworks around the world, the responsibility for ensuring ethical AI practices increasingly falls to businesses and investors.
Oh, Baby it’s a Wild World!
The World Benchmarking Alliance says, “Both the risks and opportunities of AI have materialised with exceptional speed in the last two years.”[5]
We are only at the beginning of AI development. This technology will provide many solutions in everyday processes. AI is accelerating scientific research, with applications in healthcare, infrastructure, communication, and finance, as well as in the new technologies we need to tackle climate change.
With AI expanding through all processes, leveraging its power while addressing ethical and security concerns is key. Data privacy and security will necessitate a regulatory framework. Investors can help limit the societal risks by being selective in their stock picks, or by investing in segments and companies that contribute positively to safer AI deployment, such as cybersecurity companies or privacy-enhancing technologies.
We firmly believe that our positive investment stance on technology, and on AI specifically, can be aligned with these ethical considerations.
[1] Early demo released in November 2022.
[2] Candriam estimates.
[3] Accenture and NVIDIA Lead Enterprises into Era of AI
[4] AI’s power binge
[5] The World Benchmarking Alliance is the most widely recognized standard setter for AI, both by companies and investors, and “hosts” the 2024 Investor statement on Ethical AI | World Benchmarking Alliance