Artificial intelligence (AI) is advancing at an extraordinary pace. The last couple of years have seen growing excitement about its deployment across sectors – from healthcare and logistics to finance and creative industries – fuelled by breakthroughs in generative models, wider enterprise adoption, and massive infrastructure build-outs.
Today, nearly every company has, or is developing, an AI strategy.
Meaningful adoption remains nascent, but expectations are for it to accelerate rapidly. Yet while AI promises transformative economic and societal gains, it also brings complex and material sustainability risks that equity owners need to understand and take a stance on.
As the analysis of S&P earnings transcripts below shows, the buzz is growing and, while highly concentrated within Technology, it is far from being solely the preserve of the tech sector.

[Figure: Mentions of AI in S&P earnings transcripts]

AI’s potential to drive economic productivity, accelerate scientific discovery, and address systemic challenges – from climate modelling to resource optimisation – is immense.
However, this upside is shadowed by significant downside risks. The environmental toll of AI is mounting, with compute-intensive models driving steep increases in energy consumption at data centres (and a sharp rise in water usage for cooling). Worries about the impact of AI on society are no less acute. A primary concern is job displacement – across both white-collar and blue-collar roles. Meanwhile, the erosion of trust in information ecosystems and the reproduction of systemic biases have the capacity to exacerbate existing inequalities through un- or under-regulated AI systems.

While our Fund has historically been underweight technology and, in particular, software companies, that has begun to change. As long-term investors, we play an important role in shaping how AI is developed and deployed – ensuring it aligns with sustainable, inclusive, and transparent principles.
We believe this necessitates targeted engagement across five key themes:
- Eco-efficient software (‘Green software’) and infrastructure to address the rising environmental footprint;
- Responsible and ethical AI governance to embed fairness, transparency, and accountability;
- AI safety and oversight, particularly in high-risk domains where unintended consequences can be severe;
- Transparency and explainability, enabling users and regulators to understand and scrutinise AI outputs;
- A just AI transition, ensuring that re- and up-skilling is prioritised, redeployment keeps pace with labour market disruption, and productivity gains are shared equitably.
What we have done – a shift towards IT
During the first half of 2025, the Fund began to shift some exposure from Industrials and Consumer Discretionary to Information Technology, and in particular software, buying into Tyler Technologies and Samsara, two application software companies.
While much of the early focus – and investor excitement – has centred on infrastructure providers and foundational model developers, the true breadth of AI’s societal and economic impact will be mediated through its integration into enterprise and consumer applications.
Application software firms are the critical layer translating AI capabilities into real-world productivity gains, customer experiences, and industry-specific solutions. These companies are therefore uniquely positioned to shape the responsible use of AI at scale, through the design of user interfaces, deployment safeguards, and data governance practices.
As such, engagement with this segment offers investors significant leverage – not only to capture the upside of AI adoption, but also to influence how responsibly and sustainably AI is embedded across the economy.
A responsible AI assessment framework
We have developed, and will continue to refine over time, a proprietary Responsible AI Assessment Framework.
This framework assesses a company’s level of ‘Responsible AI Maturity’.
Each assessment begins with a consideration of the individual AI use-cases for that particular company:
- Do these AI use-cases present strategic, operational or risk management opportunities (or threats)?
- Do they additionally present minimal, limited or high risks for the company and/or society?
A subsequent ‘Governance Maturity’ analysis looks at:
- The extent to which the company exhibits the requisite level of ‘Knowledge’ through its executive and non-executive functions.
- The degree to which ‘Workflows’ and processes mitigate the risk and ensure delivery against an appropriate Responsible AI policy.
- The extent to which the company’s ‘Oversight and Disclosures’ support and demonstrate compliance with key ethical standards, and adherence to, and delivery against, the company’s Responsible AI Policy.
These three principles are assessed through twelve metrics, resulting in a Governance Maturity categorisation ranging from ‘Highly Advanced’ to ‘Immature’.
At a later date, this process can be further supplemented by a 29-question, deeper-dive assessment spanning: external impact, governance capability, risk management, fairness and transparency.
This deeper-dive assessment ultimately informs and shapes our engagement priorities with individual companies.
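To make the shape of this process concrete, the sketch below shows how such a categorisation could be computed. It is purely illustrative: the metric groupings, scoring scale, intermediate band labels and thresholds are hypothetical stand-ins, not the framework’s actual (proprietary) metrics.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical maturity bands: only 'Highly Advanced' and 'Immature' are
# named in the text; the intermediate labels and thresholds are invented.
MATURITY_BANDS = [  # (minimum average score on a 0-4 scale, category)
    (3.5, "Highly Advanced"),
    (2.5, "Advanced"),
    (1.5, "Developing"),
    (0.0, "Immature"),
]

@dataclass
class GovernanceAssessment:
    """Scores for the three principles, four illustrative metrics each (12 total)."""
    knowledge: list[float]              # executive/non-executive AI competence
    workflows: list[float]              # processes mitigating risk, policy delivery
    oversight_disclosures: list[float]  # oversight, reporting and compliance

    def category(self) -> str:
        score = mean(self.knowledge + self.workflows + self.oversight_disclosures)
        return next(label for floor, label in MATURITY_BANDS if score >= floor)

# Example: a company averaging 3.0 across the twelve metrics lands in 'Advanced'.
company = GovernanceAssessment([3, 4, 3, 4], [2, 3, 3, 2], [3, 3, 2, 4])
print(company.category())
```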
Our engagement agenda
1. Eco-efficient software and AI infrastructure to address the rising environmental footprint.
Over the past 10-15 years, global demand for compute power has expanded exponentially, while energy use at data centres has grown far more slowly thanks to ever-improving efficiency.
However, data centre energy use is now forecast to explode – turbocharged by the demands of AI.
Industry leaders have acknowledged that, should current trends persist, the trajectory of global demand for compute power is unsustainable.
While advocates argue that the latest microchips offer significant energy efficiency gains and that new AI model architectures should be more efficient, critics point out that efficiency gains are slowing while AI is being embedded into everything.
In tandem, the breakneck growth in software deployment will translate into greater demand for compute power and capacity – which will further drive electricity use in data centres, networks and devices, in the process likely ramping up carbon emissions (depending on how the electricity is generated).
There is a strong likelihood that the explosive growth in AI – both for model training and inference workloads – will break the historic decoupling of compute growth from energy use.
The challenge associated with the energy and water intensity of data centres is well understood.
By 2030, global electricity consumption from AI data centres is expected to reach 945 TWh1 per year – that is slightly more than the entire electricity consumption of Japan today2.
While a lot of attention is being given to the energy and water intensity of data centres, and their potential impact on local demand for resources, the reality is that the best thing many enterprises can do to be more energy efficient (saving on costs and improving business resilience) is to move their computing systems onto cloud-based infrastructure.
The scale and elasticity of cloud platforms – allowing multiple enterprises to share the same compute – combined with the fact that renewable energy is a key consideration when siting a data centre, make the cloud a much more energy-efficient solution. It is notable that tech and web services companies were responsible for more than 90% of newly-contracted clean energy capacity in the US in 20243.
Nonetheless, all parties should remain mindful of the increasing energy demand footprint of AI-related infrastructure.
New research suggests that implementing various practical changes could reduce AI energy demand by up to 90%4.
It is also possible that cloud-pricing models could shift – in light of the supply-demand mismatch and the bottleneck presented by power availability – which would incentivise more efficient model design.
Developers should, therefore, give careful consideration to how large AI models are trained.
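To give a flavour of what such optimisation can look like in practice – an illustrative sketch, not a description of any holding’s actual practice – mixed-precision training performs the forward pass in a lower-precision number format, reducing the memory traffic and arithmetic work (and hence energy) of each training step. A minimal example, assuming a PyTorch environment:

```python
import torch
import torch.nn as nn

# Toy model and optimiser; the technique matters here, not the architecture.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 512)         # dummy input batch
    y = torch.randint(0, 10, (32,))  # dummy labels
    optimiser.zero_grad()
    # Inside the autocast region, eligible ops run in bfloat16 rather than
    # float32, roughly halving the data moved and computed per step.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimiser.step()
```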
Questions to ask management on this topic include:
- What steps have been taken to optimise model training and inference for energy efficiency?
- Are sustainability clauses included in cloud contracts?
- What percentage of workloads run on green energy?
More broadly, while microchips and data centres are continuing to become more efficient, and the grid is slowly becoming more sustainable, the principle of achieving more with less – in terms of energy input – is one that should resonate with every designer.
Resource-intensive, inefficient software drives up energy consumption – but seemingly small performance improvements can deliver significant savings in aggregate.
At present, energy efficiency is rarely a consideration in software design – and when it is considered, it can create tensions with developer productivity.
In general, software engineers build ever more layers of abstraction, which make it easier to develop new software. However, each layer has an impact on efficiency through the extra computation required – for example, interpreting code and managing memory – as well as additional background processes.
This means that while abstractions speed up development, they often consume more central processing unit (CPU) cycles, memory and energy than lower-level optimised code.
Ultimately, at scale, this can aggregate into a significant increase in unnecessary energy use.
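A toy illustration of the point, assuming nothing more than a standard Python interpreter: both snippets below compute the same sum, but the hand-written interpreted loop pays per-iteration interpreter overhead that the lower-level, C-implemented built-in avoids – a difference measured in CPU cycles and, at scale, energy.

```python
import time

data = list(range(10_000_000))

# High-level route: an interpreted loop, paying bytecode-dispatch and
# object-management overhead on every iteration.
start = time.perf_counter()
total = 0
for value in data:
    total += value
loop_secs = time.perf_counter() - start

# Lower-level route: the built-in sum() runs the same reduction in C.
start = time.perf_counter()
total = sum(data)
builtin_secs = time.perf_counter() - start

print(f"interpreted loop: {loop_secs:.2f}s; built-in sum: {builtin_secs:.2f}s")
```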
Therefore, while software is typically part of the solution in terms of environmental sustainability – it can also be part of the problem.
As hardware becomes more powerful and energy efficient, the impact of software on overall energy consumption becomes ever more significant.
As a result, it is important to design and use software to optimise energy consumption.
The rapid development and deployment of generative AI technologies, such as large language models (LLMs), only intensifies the urgent need for eco-efficient software development, for a variety of reasons:
a) Exponential growth in compute demand
- Training and running GenAI models requires vast computational resources.
- As adoption scales up across various industries, the cumulative environmental impact is set to become significant.
b) Persistent and ubiquitous inference
- Unlike traditional software, GenAI models often run continuously in production – through features such as chatbots, content generation and copilots – leading to sustained energy consumption.
- Inference, not just training, consequently becomes a significant contributor to emissions because of high usage (a back-of-envelope illustration follows this list).
c) Investor and regulatory scrutiny
- Regulators in the EU and elsewhere are beginning to consider the environmental impact of AI under digital sustainability frameworks.
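A back-of-envelope calculation shows why point (b) matters. All of the input figures below are assumptions chosen for illustration – per-query energy in particular is debated and model-dependent – but the arithmetic makes the scaling effect clear:

```python
# Illustrative assumptions only - not measured values.
ENERGY_PER_QUERY_WH = 0.3       # assumed energy per chatbot query, in watt-hours
QUERIES_PER_DAY = 100_000_000   # assumed daily query volume for a popular service
DAYS_PER_YEAR = 365

annual_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY * DAYS_PER_YEAR / 1_000
print(f"Annual inference energy: {annual_kwh / 1_000_000:,.1f} GWh")
# -> roughly 11 GWh per year from inference alone, before any training runs.
```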
The Green Software Foundation is spearheading the push to reduce emissions associated with software. It advocates designing software that uses fewer resources – such as electricity, memory and bandwidth – over its lifecycle.
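One concrete expression of this thinking is the foundation’s Software Carbon Intensity (SCI) specification, which rates a piece of software’s emissions per functional unit – per user, per transaction, per API call. A minimal sketch of the calculation, with purely illustrative input figures:

```python
def software_carbon_intensity(energy_kwh: float,
                              grid_intensity: float,
                              embodied_g: float,
                              functional_units: float) -> float:
    """SCI = (E * I + M) / R, per the Green Software Foundation's spec.

    E: energy consumed by the software (kWh)
    I: carbon intensity of the electricity used (gCO2e/kWh)
    M: embodied hardware emissions amortised to this workload (gCO2e)
    R: functional units served, e.g. users, transactions or API calls
    """
    return (energy_kwh * grid_intensity + embodied_g) / functional_units

# Illustrative figures: 1,000 kWh consumed on a 400 gCO2e/kWh grid, with
# 50 kg of amortised embodied emissions, serving two million API calls.
sci = software_carbon_intensity(1_000, 400, 50_000, 2_000_000)
print(f"{sci:.3f} gCO2e per API call")  # -> 0.225
```

This kind of per-unit metric maps directly onto the final engagement question below about tracking energy usage per user or per transaction.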
Even for SaaS (Software as a Service) enterprise tech companies – which, as cloud-native operations, have a better energy and emissions profile – the efficiency gains may still be offset by the rapid growth and increasing demands and complexity of AI.
Questions to ask of management include:
- Does the company have internal guidelines or a framework for green software development?
- What percentage of applications are optimised for energy efficiency?
- Are developers trained in sustainable coding practices?
- Are development teams incentivised to prioritise energy efficiency in code?
- Is energy usage per user or per transaction tracked?
Responsible and ethical AI governance
There is little doubt that advances in AI represent huge opportunities for society and, as a consequence, for investors too. However, there is equally little doubt that the breakneck pace of development is creating significant risks.
Despite the best efforts of policymakers, the development of technology always outpaces the implementation of regulation, while the flurry of excitement about a new innovation can incentivise action over prudence.
In a nutshell, responsible AI is about balancing – not hindering – innovation with the necessary management of risks that underlie the development and deployment of AI products and services.
Responsible and ethical AI deployment (RAI) has the potential to help resolve many societal challenges, from access to healthcare to climate change. RAI governance, however, is essential to manage risks related to bias, explainability, and user trust. Strong oversight and transparency are vital.
There is a natural tension here: the early excitement surrounding AI’s potential, while justified, needs to be tempered by a prudent approach to various foundational concerns. The adoption of AI carries risks of unintended consequences across ethical and technological dimensions – spanning vulnerabilities to new forms of cyberattack, the introduction of unwanted biases, and job disruption – as well as other reputational and legal risks. As a result, a dynamic framework for AI governance is essential. Without appropriate attention to Responsible AI, a mismanaged rapid deployment of the technology could put long-term corporate value at risk.
EOS at Federated Hermes Limited5 published an updated set of Digital Governance Principles in early 2025, building on papers from 2019 and 2022. We have leant on these Principles as well as work produced across industry to develop our Responsible AI Assessment Framework.
The EU AI Act – the world’s first comprehensive law regulating AI, adopted in July 2024 – also provides valuable guidance on how to approach high-risk AI in safety-critical areas or areas affecting people’s rights. The Act, rightly, places obligations on the AI user around risk management, data governance, transparency, human oversight, and cybersecurity.
2. A ‘just’ AI transition
Equitable response to workforce impact
While the opportunities presented by AI are potentially huge, there is understandable anxiety about what its adoption will mean for employment. Will there be mass displacement of jobs? If so, how rapidly? While employees are anxious, employers are excited about the potential for productivity. And while it is impossible to forecast what will happen with jobs, what we can engage on is a company’s approach towards a ‘just’ AI transition – an equitable sharing of the spoils from productivity gains and, ultimately, the provision of better work resulting from AI deployment.
According to the US Bureau of Labor Statistics, the occupations most susceptible to AI disruption are those with tasks easily replicated by generative AI. However, AI is also projected to augment (or replace) roles in software development and engineering and there are many gloomy forecasts for the displacement of white-collar workers across legal, media, financial and professional services.
A 2025 McKinsey report6 highlights that, while 92% of companies plan to increase AI investments, only 1% consider themselves ‘mature’ in AI deployment.
This gap suggests that, while AI’s transformative potential is widely acknowledged, its integration into workflow processes remains nascent.
AI has the potential to contribute up to US$4.4tn in productivity gains globally, according to the report. However, realising this potential depends on strategic implementation and workforce adaptation, which will require an engaged and trained workforce.
In the short term, AI is likely to augment rather than replace most jobs, with automation focused on repetitive tasks. In the medium term, the workforce will need to adapt through reskilling and upskilling, especially in areas like data literacy, AI oversight, and human-AI collaboration.
More immediately, AI is reshaping the early-career landscape, particularly for graduates entering the workforce. Employers are placing greater emphasis on adaptability, digital literacy, and problem-solving, even in non-technical roles. As a result, graduate hiring is becoming more polarised, with strong demand in tech-adjacent fields and stagnation or decline in others.
Some employers are reducing their graduate intake in favour of upskilling existing staff or leveraging AI to reduce headcount.
In the US, for example, many recent graduates face an uphill challenge securing entry-level jobs. The unemployment rate for recent college graduates is at its highest level since 2012 – and is 75% higher than for all college graduates – representing a concerning trend for young workers.

A lot of research looks at which occupations are most exposed to automation and a raft of surveys monitor AI adoption rates across various industries.
Outside of software, adoption rates remain relatively low.
However, many private sector companies are dipping their toes in the AI water and look keen to dive in fully.
Figure 4 illustrates the relative automation potential of various jobs compared with the relative real-world usage of AI tools. The message is clear: we are at the very early stages of labour disruption.

[Figure 4: Automation potential versus real-world usage of AI tools, by occupation]
There has been a notable increase in companies establishing an ‘AI First’ strategy that prioritises investment in AI over additional headcount. Some of these companies – such as Duolingo – have received criticism for adopting this approach, while, interestingly, others have adopted and then reversed the policy. In early 2025, Klarna’s CEO acknowledged the company was seeking “a balanced approach combining scalable AI with the nuance and empathy of humans”.
Further disruption is inevitably on its way. In the near term, the companies that are likely to be the most successful in adopting AI as a workforce enhancement tool – augmenting processes and realising productivity gains – will be those companies which have earned the trust of their employees.
Once again, there is no substitute for a robust corporate culture. We are asking questions such as:
- What assessment has been undertaken of the near- and medium-term impact of AI adoption on the needs of the workforce?
- Will AI be complementary or substitutive to the existing workforce?
- What are the implications of AI adoption for current ‘junior’ roles and future entry-level hiring?
- What initiatives have been established to up-skill and re-skill workers?
- Fundamentally, how equitably will any productivity gains be shared?
Historically much of our engagement with IT holdings has centred around the need for the industry to diversify its talent pool, in terms of its existing workforce and, crucially, its talent pipeline.
The industry’s workforce is unbalanced in terms of the backgrounds from which it draws people. This imbalance creates three issues: i) competition for scarce talent, resulting in excess wage inflation; ii) the opportunity cost of a narrow hiring pool; and iii) the risk of biases being ingrained in design processes.
As seen in Figure 5, wage inflation in the semiconductor sector has been very high in recent years. The semiconductor industry itself acknowledges that it has become harder to attract people than in the past, so the hiring funnel clearly needs widening.

[Figure 5: Wage inflation in the semiconductor sector]

It is notable, however, that the same wage-inflation challenges are not an issue in the software sector. Indeed, within the software industry – and, to an extent, within the hardware sector – the need to hire is declining.
AI is rapidly reducing the demand for entry-level functions.
Senior industry figures suggest that the vast majority of coding tasks could soon be done by AI. This shift will naturally reduce demand at the bottom of the pipeline – where most diversity efforts are directed – weakening traditional levers to encourage workforce diversification.
AI, therefore, does not reduce the need for cognitive diversity – in fact it raises the stakes.
Furthermore, as AI’s role in decision making becomes more significant, it will throw up increasingly pressing ethical and social implications.
In such a scenario, workforce diversity – in particular cognitive diversity – could become extremely important to spot algorithmic biases, edge cases and exclusion risks.
Therefore, ensuring diverse representation in AI development teams is going to be critical to mitigate these risks, as well as to foster innovation.
As part of this effort, the focus will need to shift from graduate hiring to retention, upskilling, internal mobility and non-traditional entrants.
Questions for companies:
- How are companies ensuring there is cognitive diversity in engineering and AI oversight teams?
- What are companies doing to upskill or reskill under-represented employees into emerging AI-related roles?
- Are hiring models adapting to value non-traditional candidates who may bring diverse thinking beyond technical pedigree?
Conclusion
This report outlines why – in light of the rapid developments in AI and the early-stage adoption across industry – we have chosen to purposefully scale up our AI-related engagement efforts with a range of companies, particularly in the software space.
For businesses everywhere, the adoption of a Responsible AI strategy is essential if they are to take advantage of this new technology successfully.
For companies in the software industry, the need to translate AI capabilities into real-world solutions is particularly acute. If it is done well, progress towards multiple UN SDGs7 – from Good Health and Well-Being through to Reduced Inequalities and Climate Action – should be supported.
Additionally, all companies need to consider the implications of AI adoption for the needs of their workforce. The potential impact on skills and headcount requirements should be thought through carefully and managed sensitively.
An engaged workforce is vital to the successful delivery of a company’s AI strategy, but employee engagement should not be taken for granted – the successful adoption of this new technology will be challenged without it. It is in our collective interest that the implementation of AI across industry should result in better work and more productive workplaces.
Please note the Federated Hermes Global SMID Equity Engagement Fund forms part of the SDG Engagement Equity composite Strategy. The SDG Engagement Equity Fund changed its name to the Global SMID Equity Engagement Fund on 24 April 2025.
1 TWh stands for terawatt-hour, a unit of energy representing one trillion (1,000,000,000,000) watt-hours.
2 International Energy Agency (IEA)
3 US tech companies contract 48GW of clean energy year on year amid AI boom – report – DCD
4 Practical changes could reduce AI energy demand by up to 90% | UCL News – UCL – University College London.
5 EOS at Federated Hermes Limited (EOS) is a world-leading stewardship service provider. Founded in 2004 on a legacy dating back to 1983, EOS delivers corporate engagement and proxy voting services.
6 AI in the workplace: A report for 2025
7 Sustainable Development Goals (SDGs): The SDGs are a set of 17 interconnected goals that were adopted by all UN member states in 2015. They are a universal call to action to end poverty, protect the planet and improve the lives and prospects of everyone, everywhere, by 2030. Learn more here: https://www.undp.org/sustainable-development-goals
BD016734






