Ahead of Alphabet’s Annual Meeting of Stockholders on Wednesday 19 June, Dr Christine Chow, Director of Hermes EOS, Hermes Investment Management, highlights the issues she will be raising when she addresses the event, including calling on the company to strengthen Board oversight of its use of artificial intelligence (A.I.).
Dr Christine Chow explained:
“The power Alphabet possesses has never been greater, and its responsibilities have never been heavier. Investors are looking to the company and its Board to display leadership in the responsible use of A.I. and the minimisation of societal risks.”
Hermes EOS, whose clients together have over $5.7 billion invested in Alphabet, strongly encourages the company to fully embrace the opportunity it has to revolutionise the responsible development and use of A.I. and set industry standards. To support this endeavour and to increase stakeholder transparency in this critical area, Hermes EOS is asking Alphabet to:
- Establish a Societal Risk Oversight Committee of the Board
- Improve the internal governance structure overseeing A.I. technologies to harness the ethical insights of employees and stakeholders
- Regularly monitor and report on the human rights impact for content reviewers and provide sufficient support to staff and contractors
Dr Chow acknowledges the steps Alphabet’s Google has taken to improve responsible A.I. These include publishing a set of principles and a white paper, introducing machine learning fairness education, and even pre-announcing search algorithm changes for the first time.
The company is also strengthening interpretability, defined as the degree to which people can understand the cause of a decision. However, when expert opinions and human judgement are introduced into A.I.’s non-linear systems, unconscious bias is not necessarily resolved and may even increase without careful monitoring and oversight.
Board Oversight for Societal Risks
At the Alphabet Annual Meeting of Stockholders, Dr Christine Chow will speak in support of Stockholder Proposal 6 regarding the establishment of an independent Societal Risk Oversight Committee of the Board to “assess the potential societal consequences of the company’s products and services” and to offer guidance on strategic decisions.
Explaining the decision to support this, Dr Chow said:
“We have long been concerned about public access to violent or extremist online content, which was sadly highlighted by the terrorist attack in Christchurch. The establishment of this Committee will ensure that the company’s technology and its impact on society are considered and focused on at the very top of the organisation. Our Responsible A.I. and data governance white paper outlines our expectations on A.I. governance.
“In our view, there is currently a gap in the necessary skills on the Board to provide the required societal risk oversight. We ask the Board to consider director candidates with experience in statistical analysis, neuroscience and social sciences to ensure the probabilistic nature of A.I. systems is adequately explained and the social impact of technology is properly considered. To the extent sufficient expertise is not present on the Board, this Committee should consider convening an advisory board of external stakeholders to access the necessary expertise to oversee the complex risks associated with A.I. The experience of the short-lived Advanced Technology External Advisory Council teaches us that candidate selection for the Committee should be transparent.
“In addition, we are concerned that the Audit Committee’s mandate includes the social impact of technology. We consider this Committee to be fully occupied with audit issues and believe it would therefore not have enough time for material non-audit risks.”
Ethical consciousness of employees and stakeholders can guide implementation
Active employee movements can help Alphabet to address controversial issues such as sexual harassment, gender inequality and workplace practices. Hermes EOS believes that the ethical consciousness of employees is a real asset to the company. It is currently unclear, however, how such wide-ranging feedback is incorporated into the internal governance structure.
We recommend a formal and inclusive feedback system for employees, as well as other stakeholders in the A.I. ecosystem such as contract developers and test users, to ensure that technology deployment is subject to robust product design and impact assessment.
Human rights impact on the front line
In addition to machine-led monitoring, Alphabet employs thousands of human moderators on the ‘front line’, who are required to review sensitive, disturbing or violent content and make content assessment decisions.
Dr Chow explains:
“We call on Alphabet to give greater disclosure on the working practices and support, both psychological and financial, given to staff and contractors globally, not only in the US, in these demanding roles. Whilst Alphabet is, in some ways, an exemplary employer, the human rights impact of jobs of this nature potentially, and inadvertently, exposes the company to risks. The company therefore needs to review the level of support given to staff and contractors to ensure it is sufficient.
“There are clearly many areas of concern for investors with Alphabet’s use of artificial intelligence. While we have seen the company make progress in some areas, we encourage the Board to be accountable for the responsible use of A.I., including its impact on society, and to establish internal governance mechanisms. If Alphabet cannot do it, with all the resources and intellectual capital at its disposal, investors will question whether any company can.”