
AI: brave new worlds

Artificial intelligence (AI) is transforming the way we live and work. But while AI has the potential to create efficiencies and boost economic output, it also poses ethical and legal challenges. If the sustainability issues surrounding AI are scrutinised and addressed, we think the technology has the potential to serve both business and humankind – and create investment opportunities along the way.

A new paradigm

It is now nearly 70 years since Alan Turing published his seminal paper 'Computing Machinery and Intelligence', in which he considered whether machines could think. Since then, the field of AI has evolved rapidly – but only recently did it progress from the realms of science fiction to a reality that is transforming our lives, societies and civilisations (see timeline).

Timeline: the evolution of AI

Source: Hermes, press reports, as at August 2019.

Data-heavy industries like finance, ecommerce and healthcare are at the frontline of this radical change. Developments in these areas will have major consequences – both positive and negative – for customers, suppliers, employees and investors.

AI is set to become a core part of the ecosystem in these sectors, radically altering business models, creating efficiencies and generating investment opportunities along the way. Given the rapid pace of innovation, tomorrow’s index leaders may not even exist today. Inherent in this phenomenon, however, are sustainability risks, making it imperative that investors understand the wide-ranging commercial and social implications of AI.

The application of AI in data-intensive industries has already been hugely disruptive and is set to become even more so. More than 70% of senior executives expect AI to play a significant role in their operations within the next five years, up from 20% in 2017.1 Meanwhile, global spending on AI is forecast to grow to over $52bn a year by 2021, equivalent to a compound annual growth rate of 46% from 2016.2
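
As a back-of-the-envelope illustration of what that growth rate implies (our own arithmetic, not a figure from the forecast itself), compounding backwards from the 2021 estimate gives a rough sense of the starting base:

```python
# Illustrative check only: a 46% compound annual growth rate over the
# five years from 2016 to 2021, ending at roughly $52bn.
spend_2021_bn = 52
cagr = 0.46
spend_2016_bn = spend_2021_bn / (1 + cagr) ** 5
print(f"Implied 2016 spending: ~${spend_2016_bn:.1f}bn")  # roughly $7.8bn
```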

Forecasts looking at the potential impact of AI on the global economy suggest it offers considerable opportunities. By one estimate, integrating AI into society will mean that economic output is 16% higher in 2030 than it is in the base scenario (see figure 1).

Figure 1: Ratcheting up

Source: McKinsey, as at August 2019. 

Finance

The prevalence of AI in the world of banking and finance is low relative to other sectors. This is apparent from the number of AI-related patents filed in the industry compared with others (see figure 2). Nonetheless, the sector is ripe for disruption and a wealth of opportunities exists.

Figure 2: Room to grow

Source: WIPO, as at August 2019.

According to the World Economic Forum (WEF), financial-services institutions alone will have invested $10bn in AI by next year as they seek to exploit opportunities created by machine learning. That will help the global banking industry make savings of $1tn in reduced operating costs by 2030 across front-, middle- and back-office functions.3 From predictive analysis of structured and unstructured data and complex trading algorithms to automated anti-money laundering and know-your-customer compliance checks, AI is pervasive and is expected to create $1.2tn in annual value for financial services by 2035.4
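
To make the anti-money-laundering use case more concrete, the sketch below shows one common pattern: training an unsupervised anomaly detector on historical transactions and flagging outliers for human review. It is a minimal illustration using scikit-learn's IsolationForest with invented feature names and data, not a description of any particular institution's system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, transactions in
# the past 24 hours, and distance from the customer's usual location (km).
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical amounts
    rng.integers(8, 22, 5000),       # daytime activity
    rng.poisson(3, 5000),            # modest daily frequency
    rng.exponential(5, 5000),        # close to home
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new transactions: -1 marks an anomaly to escalate for manual review.
candidates = np.array([
    [30.0, 14, 2, 3.0],       # ordinary purchase
    [9500.0, 3, 40, 4200.0],  # large, nocturnal, high-frequency, far from home
])
print(model.predict(candidates))  # e.g. [ 1 -1 ]
```

In practice such a model would only shortlist cases; the compliance decision itself stays with human analysts.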

Healthcare

In healthcare, the deployment of AI as an assistive tool could increase accuracy and reduce diagnostic errors. DeepMind has taught machines to read retinal scans with at least as much accuracy as an experienced junior doctor.5 AI can also lead to cost savings: a new AI tool for the early detection of heart disease, developed by researchers at the John Radcliffe Hospital in Oxford, could save the UK’s National Health Service (NHS) an estimated £300m a year.6

A recent report by former UK health minister Lord Darzi suggests that AI-enabled automation of administrative tasks like booking appointments and processing prescriptions could save the NHS £12.5bn a year.7 Meanwhile, a healthcare start-up has developed AI bot applications that it claims can diagnose medical conditions and deliver clinical triage more accurately than human doctors, although this is disputed by the Royal College of General Practitioners, a professional body.8

Ecommerce 

Online and mobile commerce platforms – from Amazon to Alibaba – have pioneered the development and exploitation of AI. Even if they do not grasp the technical aspects of the underlying algorithms, most consumers in the developed world are familiar – and comfortable – with chatbots, recommendation engines, predictive sales and warehouse automation. AI applications deployed in ecommerce can analyse complex data sets, identify patterns, create unique, personalised customer experiences and use techniques like visual search – seeking products by entering images into search engines – and voice assistance.
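
As an illustration of the pattern-finding behind recommendation engines, the sketch below computes item-to-item similarity from a small purchase matrix and suggests products a shopper has not yet bought. It is a toy example with invented data and product names, and assumes nothing about how any particular platform's engine actually works.

```python
import numpy as np

# Rows are shoppers, columns are products; 1 = purchased. Invented data.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
], dtype=float)
products = ["kettle", "toaster", "blender", "mixer", "coffee grinder"]

# Item-to-item cosine similarity: products bought by similar sets of people.
norms = np.linalg.norm(purchases, axis=0, keepdims=True)
similarity = (purchases.T @ purchases) / (norms.T @ norms + 1e-9)

def recommend(shopper_row, top_n=2):
    scores = similarity @ purchases[shopper_row]
    scores[purchases[shopper_row] > 0] = -np.inf  # hide items already bought
    return [products[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(1))  # suggestions for shopper 1 based on peers' baskets
```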

By 2020, some 85% of all customer interactions will be with chatbots and self-serve technologies.9 Personalising the digital-shopping experience and customer journey could boost profitability in the wholesale and retail sectors by 59% in the years to 2035.10 Retailers that have introduced AI-driven personalisation strategies have seen sales grow at up to three times the rate of rivals.11

Use of voice-controlled virtual assistants, such as Amazon’s Alexa, Apple’s Siri and Google Assistant, is becoming widespread. These intelligent assistants rely on an AI technique called natural-language processing, which enables voice-activated product and service searches. However, their capabilities are expected to expand to customer-experience functions such as payment processing, real-time support and complex query handling.

To capture these opportunities and help maintain the long-term viability of AI and its commercial applications, investors and engagers must fully understand the risks involved and work with companies to pre-empt them. From an environmental, social and governance (ESG) and sustainability perspective, a clear understanding of the overall impact of AI is critical to detecting and preventing machine-learning models that discriminate against or exclude certain groups and individuals.

Rise of the machines: defining the terms

In order to assess the risks and opportunities presented by AI, we need to determine exactly what we mean by the term. A universal definition is elusive, given the variety of processes the phrase embraces. The WEF has, however, attempted to pin one down, defining AI as “a set of capabilities enabled by adaptive predictive power and displaying some degree of autonomous learning”.12 AI machines can detect patterns, predict future events, generate rules, make decisions and interact with humans. AI is markedly different from robotic process automation, the software-driven automation of repetitive tasks.

It is also helpful to distinguish between general and narrow AI systems. Most, if not all, of the AI currently in use is narrow and single-task specific. It typically involves improving the performance of a highly specialised task – playing games, recognising images, filtering spam or even driving a car – by finding patterns in vast data sets.
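
A minimal sketch of what 'narrow' means in practice: the model below learns a single task – separating spam from legitimate messages – from labelled examples, and can do nothing else. The tiny dataset and pipeline are invented purely for illustration; real systems learn from millions of examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of labelled messages (invented) standing in for a vast data set.
messages = [
    "win a free prize now", "claim your reward cash",      # spam
    "meeting moved to 3pm", "minutes from today's call",   # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Bag-of-words features plus a naive Bayes classifier: a classic narrow-AI task.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["free cash prize", "agenda for the call"]))  # expected: [1 0]
```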

Artificial General Intelligence (AGI), also known as strong AI, refers to machines that display intelligence with similar cognitive functioning, aptitude and experiential understanding to human beings, but with the ability to process data at superhuman speeds. AGI does not exist at present and no consensus exists on when, or indeed if, it is likely to emerge. Forecasts range from a decade to never.

AGI could have the capacity to reason, plan, navigate uncertainty and be creative. Its popularity as a theme in science fiction (think of movies like 2001: A Space Odyssey, Metropolis, The Matrix and Terminator) has created general misconceptions about the nature and potential of AI. This can lead to unrealistic expectations and unwarranted hype among consumers and investors.

Much of the risk surrounding AI stems from the fact that its systems are, in the words of their creators, black boxes (an industry term meaning their algorithms are opaque). Industry leaders often rely on highly complex models and do not always provide clear explanations of their machine-learning processes. This lack of openness, transparency and public scrutiny is a major risk when it comes to accountability and reputation.

Identifying AI risks: the good, the bad and the ugly

The digitisation and processing of consumer information create efficiencies which can, in turn, deliver value. But they can also result in risks and vulnerabilities. There is little dispute that AI technology can create ethical, technological and legal challenges: Amazon and Alphabet13 have both highlighted such concerns in the ‘risk factors’ sections of their annual reports. Sustainable investment in the AI technologies of the future requires an understanding and appreciation of the nature and extent of the risks involved.

Discrimination

One of the most infamous examples of discrimination came in 2016 when Microsoft’s AI chatbot, Tay, had to be withdrawn after it made racist comments and denied the Holocaust.14 Tay was designed to learn from interactions with Twitter users: its racist outburst was caused by the malicious comments it had processed.15 The episode remains a pertinent reminder of AI’s capacity for unintended but harmful consequences.

AI can also unintentionally lead to exclusion. There has been controversy over the fact that AI digital assistants like Siri and Alexa have female voices as their default settings, with some commentators suggesting that this reinforces gender stereotypes. In response, Google and Apple have started to offer male voices as alternatives to the default. The gender-bias allegations also led to the development of Q – the world’s first genderless voice – which is aimed at ending the exclusion of non-binary users.

Another form of discrimination is dynamic pricing, which harnesses the power of AI data analytics to predict how much individual consumers are willing to pay for items sold in online marketplaces, based on their past transactional behaviour. This form of targeting effectively uses customers’ data against them and has clear ethical ramifications.
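
To show the mechanism being described – purely as an illustration of the risk, with invented data and feature names – the sketch below fits a simple model that predicts an individual's likely willingness to pay from past transaction features and then quotes a personalised price.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented features per customer: average basket value, purchases per month,
# and share of past purchases made at full (non-discounted) price.
history = np.array([
    [20.0, 2, 0.1],
    [45.0, 4, 0.5],
    [80.0, 6, 0.9],
    [60.0, 5, 0.7],
])
observed_max_paid = np.array([22.0, 48.0, 95.0, 70.0])  # proxy for willingness to pay

model = LinearRegression().fit(history, observed_max_paid)

# A new customer's history is used to set a personalised price for the same item.
new_customer = np.array([[70.0, 5, 0.8]])
base_price = 50.0
quoted = max(base_price, float(model.predict(new_customer)[0]))
print(f"Quoted price: {quoted:.2f}")  # higher-spending customers see higher prices
```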

Financial contagion

The prevalence of AI exposes the financial sector to broader risks of contagion as connectivity across borders and systems increases. Given the vulnerability of financial markets to herd behaviour, there is potential for a single shared algorithm to replicate a bad trading decision across multiple institutions, with severe implications for markets and even economies.16 The risk stems from the tendency of algorithms to react instantly to set market criteria and movements. This can greatly amplify systemic risk by intensifying volatility, creating ripple effects and undermining confidence in markets. The 2010 US ‘flash crash’ in equity and futures indices is one example of an algorithm-driven shock to markets.
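
The amplification mechanism can be illustrated with a toy simulation – entirely stylised, and not a model of the 2010 flash crash or any real market. When many institutions run the same stop-loss rule, a single price shock triggers synchronised selling that feeds on itself:

```python
# Stylised illustration of herd behaviour: N institutions share one rule --
# "sell 10% of holdings if the price falls more than 2% in a step".
N_INSTITUTIONS = 50
HOLDINGS = 1_000          # shares held by each institution
PRICE_IMPACT = 0.00002    # price falls by this fraction per share sold

price = 100.0
prev_price = 100.0
price -= 2.5              # an initial exogenous shock of 2.5%

for step in range(5):
    drop = (prev_price - price) / prev_price
    sold = 0
    if drop > 0.02:                       # every institution reacts identically
        sold = N_INSTITUTIONS * int(HOLDINGS * 0.10)
    prev_price = price
    price *= (1 - PRICE_IMPACT * sold)    # synchronised selling pushes price lower
    print(f"step {step}: sold {sold:>5} shares, price {price:.2f}")
```

With heterogeneous rules, the selling would be staggered and the cascade far weaker; the shared algorithm is what turns one shock into a spiral.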

This contagion risk from overdependence on the capabilities of machine learning was highlighted in a 2017 report by the UK’s Financial Stability Board.17 The reporting panel, led by Mark Carney, the Governor of the Bank of England, noted that the trend towards eliminating human judgements meant that financial institutions were shifting towards a shared view of risk that was decided by AI. It pointed out that the network effects and scalability of AI solutions, combined with their opaqueness, could result in unintended consequences.

As financial firms come to rely on a small group of third-party developers and services, the technical foundations that underpin global financial systems become susceptible to the same potential flaws and risks.

Medical misdiagnosis

In medicine and healthcare, ethical commentators have drawn attention to the fundamental dichotomy between the technology world’s ethos of ‘move fast and break things’ and the medical profession’s axiom of ‘first, do no harm’.

AI in medicine should heed the valuable cautionary tales of previous diagnostic breakthroughs. The decades-long follow-up analysis of computer-aided diagnostic (CAD) tools in the field of mammography is a case in point. These breast-cancer screening systems appeared to work well when they were introduced in the 1990s. However, subsequent analysis revealed that these tools not only increased the number of false positives and unnecessary recalls but also missed some actual cancers.18

Cyber-attack and intrusion

Critical AI systems are emerging as attack targets much as the internet rapidly attracted the attention of cyber criminals and hackers in the 1990s. In 2017, a total of 978m people across 20 countries were affected by AI-enabled cyber crime, at a cost of $172bn.19

AI has already demonstrated its potential as a hacking tool. Malicious applications of machine learning can track and mimic individuals’ behaviour on social media, impersonating their digital presence to create customised phishing tweets or emails that are highly specific and personalised.

AI’s propensity for identifying and interpreting patterns allows it to exploit vulnerabilities in systems. Intelligent malware and ransomware can learn as they spread, while machine intelligence and advanced data analytics can coordinate and customise global cyber-attacks.20 Aggressors can also use deep-learning technologies to infiltrate IT infrastructure and then remain on the network by adapting to their environment. AI-powered cyber-attacks can be triggered by both geolocation and voice recognition.

Researchers at Oxford and Cambridge universities have also warned that AI can be used to hack drones, autonomous vehicles and even aircraft, all of which have become more automated.

Increasingly, protection from AI cyber-attacks will be the responsibility of security tools and programmes, which will in turn be driven by AI. However, while AI can help with the detection of previously undetectable attacks, human supervision and intervention are still required.

Doomsday scenarios: protection if it goes wrong

AI is not an application or an enhancement: it is a sea change, a profound shift in the way that industry, science and commerce operate. There is nothing new about the fears surrounding AI: similar concerns about overreliance on computers and machine learning have been around for decades. What is new, however, is the growing acknowledgement of the need for a fit-for-purpose stewardship framework to look at ESG and sustainability issues in AI.

MIT Media Lab, a university research laboratory, has used self-driving cars to illustrate the moral dilemmas inherent in the human-machine nexus. It posed the following question to its Moral Machine, a platform that collates opinions on moral conundrums for machine intelligence:

"Picture a driverless car cruising down the street. Suddenly, three pedestrians run in front of it. The brakes fail and the car is about to hit and kill all of them. The only way out is if the car crosses to the other lane and swerves into a barrier. But that would kill the passenger it’s carrying. What should the self-driving car do?

Would you change your answer if you knew that the three pedestrians are a male doctor, a female doctor, and their dog, while the passenger is a female athlete? Does it matter if the three pedestrians are jaywalking?"21

Moral judgements – which are themselves subjective – will need to be factored into AI-controlled systems if they are to avoid harming humans. The question of how these ethical decisions are made, and who will ultimately take responsibility, goes to the heart of ESG investment considerations surrounding AI. In the absence of adequate safeguarding protocols, AI has the potential to both expand and alter the character of existing threats, as well as introduce new ones.

Paul Nemitz, a principal architect of the EU’s General Data Protection Regulation (GDPR), identifies several sources of AI risk that need to be addressed as a matter of urgency.22 These derive from the concentration of digital power in too few hands, which can lead to monopoly control in several areas: the infrastructure for public discourse, the collection of personal data and the profiling of people, and the domination of AI investment, most of it in black-box technology.

Implications for governance must also be considered. Regulators often lack the technical expertise to inspect complex algorithms, especially if the development process is improperly documented or there are persistent, system-wide gaps in governance. The Cambridge Analytica and Panama Papers23 scandals are examples of the regulatory shortcomings of both international and privacy law when it comes to AI and good governance.

Cambridge Analytica is a particularly egregious example. In early 2018, the political-consulting firm was reported to have improperly shared data from 87m Facebook profiles.24 Since then there has been growing pressure on regulators to impose the same level of scrutiny on Facebook that media companies currently face.

Even nominally innocent cases can pose moral quandaries. DeepMind confronted ethical issues when it partnered with the NHS to develop apps and AI research, procuring large amounts of sensitive public-health data in the process. It was reported that one of the hospitals working with Google DeepMind Health, the Royal Free NHS Foundation Trust, had breached the Data Protection Act by handing over the personal data of 1.6m people.25 The resulting backlash and government censure reveal the tensions surrounding AI’s appetite for vast amounts of personal data.

Humans should be able to regulate, control and monitor how AI is being developed, integrated and upgraded. But complicated, inscrutable black-box AI makes that harder. Many policymakers use information provided by AI when developing regulation; the fact that its algorithms are beyond the comprehension of many people has the potential to lead to conflicts of interest.

But the black-box argument is seen by some as a deliberate obfuscation. In a recent paper for the Royal Society, Joshua A. Kroll argues that algorithms are ‘fundamentally understandable pieces of technology’. He contends that software systems are meant to interact with the world in a controlled fashion and that technologies can always be understood at a higher level, claiming that no system is completely inscrutable.26 
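
One concrete example of what 'understandable at a higher level' can mean in practice: even without inspecting a model's internals, permutation importance shows which inputs its decisions depend on. The sketch below uses scikit-learn on synthetic data; it illustrates the general idea rather than any specific system or Kroll's own methods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only some features actually drive the label.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=2,
                           n_redundant=0, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy degrades:
# a model-agnostic, higher-level view of what the "black box" relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```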

Protecting customer data from theft and loss by securing IT infrastructure is becoming increasingly challenging. Companies and countries are tackling it through a combination of approaches including the implementation of new security and privacy laws, tougher enforcement and harsher penalties.

But perhaps the best protection will stem from the fact that future deployment of AI is likely to be in the form of collaboration with humans – a partnership, rather than a replacement. Research into 1,500 companies demonstrated that the greatest performance improvements were achieved through combining machine and human intelligence. Mercedes-Benz’s use of AI-enabled robots at its Stuttgart factory is one example.27 Workers on the production line use lightweight robots, originally designed for use in outer space, as a ‘third hand’. While human workers still control the assembly process, they can deploy the robot for sensitive, precise, repetitive and tiring tasks such as handling items overhead.

This trend is also visible in the current hiring practices of some tech firms which are now recruiting artists, philosophers and journalists, as well as data scientists and computer-design specialists, in a bid to facilitate an optimised working approach that combines both human and machine.

The new sheriff in town: regulation and governance

The incumbent leaders in AI currently enjoy a self-reinforcing position of monopolistic dominance that protects and enhances their competitiveness and profitability, but which is not in the best interests of consumers, investors or society. Such an asymmetry will not endure and is already meeting with an increasingly robust regulatory response aimed at providing transparency around how algorithmic decisions are reached.

Investors who integrate ESG risks into their decisions have three possible avenues open to them. Refusing to invest in companies that deploy AI would severely restrict the investable universe, given the prevalence of the technology. Ignoring the issue is clearly not an option, given the centrality of ESG-related issues in the AI industry. It follows, therefore, that engagement with companies on AI-related ESG criteria is the only viable approach.

Precedents already exist for the effective regulation and governance of AI. There is now a need to ensure that key oversight functions keep pace with the risks created by rapid technological advancements.

The UK’s Financial Conduct Authority (FCA) is an example of how a regulator has successfully introduced oversight of algorithmic and high-frequency trading, in line with MiFID II. The FCA’s requirements for algorithmic trading include: demonstrating sufficient resilience to deal with peak order volumes, being able to cope in stressed market conditions, preventing market crashes or disorderly trading caused by erroneous trades, and carrying out appropriate testing of algorithms.28
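
The kinds of controls described above can be sketched in code. The fragment below is a simplified, hypothetical pre-trade check of the sort an algorithmic trading system might run before releasing an order; the limits and names are invented for illustration and are not drawn from any regulatory text or real system.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

# Hypothetical limits a firm might configure to guard against erroneous or
# runaway orders; real thresholds would be calibrated per market and reviewed.
MAX_ORDER_QUANTITY = 10_000
MAX_NOTIONAL = 1_000_000.0
PRICE_COLLAR = 0.10          # reject orders >10% away from the reference price
kill_switch_engaged = False  # set to True to halt all algorithmic order flow

def pre_trade_check(order: Order, reference_price: float) -> bool:
    """Return True only if the order passes every automated control."""
    if kill_switch_engaged:
        return False
    if order.quantity > MAX_ORDER_QUANTITY:
        return False
    if order.quantity * order.price > MAX_NOTIONAL:
        return False
    if abs(order.price - reference_price) / reference_price > PRICE_COLLAR:
        return False  # likely a fat-finger error or a runaway algorithm
    return True

print(pre_trade_check(Order("XYZ", 500, 101.0), reference_price=100.0))  # True
print(pre_trade_check(Order("XYZ", 500, 150.0), reference_price=100.0))  # False
```

A production system would add logging, throttling and stress testing around such checks; the point here is simply that algorithmic controls are inspectable, testable artefacts.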

The fact that regulatory authorities have experience monitoring algorithms directly contradicts assertions that they are ungovernable. The FCA example suggests that institutional oversight is not only necessary and desirable but also increasingly feasible.

Self-regulation is sometimes posited as an alternative to official oversight. Already, some industries are attempting corrective measures. OpenAI, a research institute co-founded by Tesla’s Elon Musk, aims to spend more than $1bn in a bid to influence the development of AGI in a positive direction.29 Google, Amazon, Facebook, IBM, and Microsoft have formed The Partnership on AI, which has similar goals.

The very nature of AI also means that it can provide its own solutions to governance and regulation. Predictive analytics and enhanced decision-making tools have the potential to greatly improve the identification, risk management and benchmarking of corporate ESG and sustainability issues.

Fund managers are now using these tools to determine whether companies’ disclosures on ESG factors constitute ‘greenwashing’ – in other words, whether they have made unsubstantiated claims about the environmental benefits of their practices and products. Investors can also use AI to sift through vast amounts of information, much of which is opaque or immaterial. Some ESG ratings systems produce up to 300 data points for a single company, covering issues from carbon emissions to gender-pay profiles – a volume that can easily overwhelm traditional analysis. AI can help cut through the irrelevant material.
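
As a simple illustration of how such sifting might work – a sketch with invented data, and a crude statistical stand-in for the more sophisticated tools fund managers actually use – the fragment below reduces a wide set of ESG indicators to the handful that vary meaningfully across companies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: 300 ESG data points for 50 companies, most of them noise.
n_companies, n_metrics = 50, 300
data = rng.normal(0, 0.05, size=(n_companies, n_metrics))
data[:, :12] += rng.normal(0, 1.0, size=(n_companies, 12))  # 12 genuinely informative metrics

# Keep only metrics whose variation across companies exceeds a threshold --
# a rough proxy for separating material signals from boilerplate disclosure.
variances = data.var(axis=0)
material = np.where(variances > 0.1)[0]
print(f"{len(material)} of {n_metrics} data points retained:", material[:15])
```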

The lasting impact of the Cambridge Analytica scandal may well be that there is now a widespread acknowledgement of the need for independent AI regulation and a compulsory code of ethics for companies. When Lord Clement-Jones, former chair of the House of Lords select committee on AI, released a report on the industry, he acknowledged that the ‘principles do come to life a little bit when you think about the Cambridge Analytica situation’.30 Lord Clement-Jones also contributed to our paper from earlier this year, Investors' expectations on responsible artificial intelligence and data governance.

The House of Lords report recommended several ethical principles that should form the basis of AI regulation: that AI should be developed for the common good and benefit of humanity, that it should operate on principles of intelligibility and fairness, and that it should not diminish the data rights or privacy of individuals, families or communities.

Furthermore, it stated the right of all citizens to be educated to enable them to flourish mentally, emotionally and economically alongside AI. And finally, it stressed that autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

AI is a reality. The investment opportunities it creates, especially in data-rich industries such as healthcare, finance and ecommerce, are enticing given the vast sums that are being allocated to research, development and commercialisation of AI in these areas.

AI has the potential to be harnessed for good or for ill. At Hermes, we see it as a positive challenge to be addressed through stewardship and ESG analysis. As investors and engagers, we have the opportunity to influence the development and application of AI so that it benefits businesses, minimises risk to society and ultimately serves humanity.

  1. ‘Putting artificial intelligence to work’, published by Boston Consulting Group on 28 September 2017.
  2. ‘Worldwide artificial intelligence spending guide’, published by the International Data Corporation in March 2018.
  3. ‘Augmented finance and machine intelligence’, published by Autonomous Research.
  4. ‘How AI boosts industry profits and innovation’, published by Accenture in June 2017.
  5. ‘Artificial Intelligence in Healthcare’, published by the Academy of Medical Royal Colleges in January 2019.
  6. ‘AI early diagnosis could save heart and cancer patients’, published by BBC News on 2 January 2018.
  7. ‘The NHS long-term plan: lessons from the Lord Darzi review of health and care’, published by the Institute for Public Policy Research on 20 November 2018.
  8. ‘Healthcare start-up says AI can diagnose patients better than humans can, doctors call that dubious’, published by CNBC on 28 June 2018.
  9. ‘10 reasons why AI-powered, automated customer service is the future’, published by IBM on 16 October 2017.
  10. ‘How AI boosts industry profits and innovation’, published by Accenture in June 2017.
  11. ‘Unlocking growth in CPG with AI and advanced analytics’, published by Boston Consulting Group on 15 October 2018.
  12. ‘The new physics of financial services’, published by the World Economic Forum in August 2018.
  13. ‘2018 Amazon annual report’, published on 1 February 2019, and ‘2018 Alphabet form 10-K’, published on 2 April 2019.
  14. ‘Microsoft pulls Twitter bot Tay after racist tweets’, published by the Financial Times on 24 March 2016.
  15. ‘Tay: Microsoft issues apology over racist chatbot fiasco’, published by the BBC on 25 March 2016.
  16. ‘The societal impact of AI in financial services’, published by Deloitte on 1 November 2018.
  17. ‘Artificial intelligence and machine learning in financial services’, published by the Financial Stability Board on 1 November 2017.
  18. Journal of the American College of Radiology, published in March 2018.
  19. ‘2017 Norton cyber security insights report’, published by Symantec in January 2018.
  20. ‘The global state of information security survey’, published by PwC in October 2017.
  21. ‘Your (future) car’s moral compass’, published by MIT Media Lab on 11 February 2019.
  22. ‘Constitutional democracy and technology in the age of artificial intelligence’, published by the Royal Society on 15 October 2018.
  23. Journalists were able to use open-source data-mining technology to navigate 11.5m documents – consisting of 2.6 terabytes of data – that had been released by a Panama-based law firm.
  24. ‘Facebook scandal “hit 87 million users”’, published by the BBC on 4 April 2018.
  25. ‘Royal Free breached UK data law in 1.6m patient deal with Google’s DeepMind’, published by the Guardian on 3 July 2017.
  26. ‘The fallacy of inscrutability’, published by the Royal Society on 15 October 2018.
  27. ‘Collaborative intelligence: humans and AI are joining forces’, published by the Harvard Business Review in July 2018.
  28. ‘Algorithmic and high-frequency trading requirements’, published by the FCA in February 2018.
  29. ‘New AI fake text generator may be too dangerous to release, say creators’, published by the Guardian on 14 February 2019.
  30. ‘Cambridge Analytica scandal “highlights need for AI regulation”’, published by the Guardian on 16 April 2018.
