Artificial Intelligence (AI) will transform how businesses operate, and in many industries integrating the technology will be essential for survival. Companies recognise that embracing AI, for example by optimizing their supply chains, opens new paths for boosting productivity. As with the internet and the cloud, AI will quickly become business as usual.
We recognise, however, that there is currently considerably more talk than action, and that the talk is often most optimistic where current profitability is challenged. True monetization from AI has barely begun. We anticipate the greatest short-term gains on the enterprise, rather than the consumer, side, and we look for credible plans to boost efficiency rather than companies embracing the AI buzzword without identifying how the technology can be implemented.
Predicting which companies will leverage AI most effectively over the next 10 years is challenging, given both the rapid pace of AI innovation and the evolving landscape of AI regulation. The uncertainty is further amplified by the remarkable rate at which computational resources continue to expand, fuelling AI’s capabilities and applications. In 2023, the sheer depth and scale of AI’s potential drove a rally in AI hardware stocks, for example specialist semiconductor and infrastructure providers. The question then becomes when, and where, the rally will spill over into software stocks.
With monetization in such early stages, and with valuations of the companies most directly exposed to AI at lofty levels, it is inevitable that headlines will question when the “bubble” will burst. This is the typical trajectory for a transformational technology. There will be mistakes, there will be pockets of irrational exuberance, and no doubt the market will at some point step away from these names out of fear rather than fundamentals. We expect 2024 to bring the first steps towards AI implementation, helping to turn the hype into reality.
Google Trends – mentions of generative AI, December 2022 – November 2023
AI is fast becoming one of the most important themes in investment. While AI has the potential to drive a fourth industrial revolution and is creating unprecedented new opportunities for businesses, it introduces new ethical dilemmas and risks.
The need for ethical AI was highlighted earlier in the year when over 1,000 researchers and executives called for a halt to what they described as a ‘dangerous’ arms race in AI development. More recently, the inaugural global AI Safety Summit, hosted by the UK at Bletchley Park, sought to address the risks posed by frontier AI.
The potential risks of AI are well documented and include misinformation, unintended bias, a lack of transparency or explainability, and disruption of the workforce. The urgency to address these concerns has led to a global, yet fragmented, race towards new regulation – from the EU’s AI Act to China’s Generative AI Regulation.
EOS has been in dialogue with companies about AI since 2018 and currently engages on over 60 AI-related objectives and issues. Our Digital Rights Principles set out our expectation that companies disclose how their AI algorithms work and the variables they consider, and that they allow users to decide whether these should shape their experiences. They also call on companies to eliminate unintended racial, gender, and other biases. Much of our engagement is aimed at ensuring that companies establish ethical AI governance principles, which we view as foundational to effective risk mitigation.
In 2024, as AI deployment accelerates, we expect the importance of strong AI governance to become more apparent. In 2023, companies began to face industrial and legal action over issues ranging from workforce disruption to unintended bias and misinformation. In 2024 this trend is likely to continue, given the increasing number of use cases for AI. We expect that, while governments will seek to establish common ground on regulation, regional differences will persist, given conflicting priorities regarding innovation and end-user protection. This will make it essential for companies that operate internationally to adhere to high standards of AI ethics and self-regulate in a manner that can mitigate risk across multiple jurisdictions.