
Q&A: AI and financial services

In our latest Q&A, Janet Wong, an engager for EOS at Federated Hermes, talks to Dr David Hardoon, a senior AI adviser to the Union Bank of the Philippines, and the former chief data officer and senior AI adviser to the Monetary Authority of Singapore, about AI and data governance in financial services.

As the financial services sector has introduced artificial intelligence (AI) to various functions such as chatbots, hiring and lending, some controversies have arisen. This has prompted a debate about how AI is being deployed, touching on concerns around fairness, discrimination, and transparency. At the same time, regulators and policymakers around the globe are calling for greater AI governance. In this discussion, we will explore some of these issues.

Janet Wong, EOS: Why are AI and data governance important to financial institutions?

Dr David Hardoon, Union Bank of the Philippines: It is their fiduciary duty to oversee AI and data governance. Without adequate control and management, AI deployment can be financially material, as regulators have already imposed fines, and can also cause reputational damage.

In my previous role as chief data officer and special adviser (artificial intelligence) to the Monetary Authority of Singapore, I established the Fairness, Ethics, Accountability and Transparency (FEAT) principles, which provided a foundation for AI governance in the financial industry. Working with the FEAT committee, we set out these principles and best practice for the use of AI and data analytics. For example, on fairness, we stated that individuals or groups of individuals should not be systematically disadvantaged through AI and data analytics-driven decisions, unless those decisions can be justified.

JW: In your view, what are the key challenges for boards of financial institutions in upholding AI governance? What would be your recommended solutions?

DH: AI governance touches on numerous technical and non-technical areas. In my view, there are two key challenges. First, AI may reveal core social and cultural dilemmas. There is no universal definition of fairness, as the perception of fairness depends on the context and varies across cultures and geographies. For example, the definitions and classifications of marginalised or disadvantaged people vary across North America, Europe and Asia.

Research studies show that people's sensitivity to AI ethical issues varies, as reflected in media coverage in different countries. For example, Chinese news reports focus more on safety and security, while UK and US coverage places greater emphasis on fairness and data privacy.1 Therefore, solutions must be pragmatic, realistic and largely non-technical, and developed in collaboration with the relevant external government agencies.

Second, there is a challenge around providing end-to-end technical AI governance solutions for each application as these need to cover the different governance considerations of data quality, model development, model operationalisation and process governance. My recommendation is, as my mother used to tell me, not to boil the ocean to make a cup of tea. We should incorporate governance solutions as they become available. To address these challenges, at the Union Bank of the Philippines we are taking a pragmatic approach – we are developing common metrics but with different monitoring thresholds, and corrective actions for various applications in different markets. For example, it might be a 5% threshold that triggers further due diligence in Market A, but a 1% threshold in Market B.
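The common-metric, market-specific-threshold approach described above can be sketched as follows. This is an illustrative example only: the metric (an approval-rate gap between two groups), the market names and the threshold values are hypothetical assumptions, not UnionBank's actual configuration.

```python
# Illustrative sketch: one shared metric monitored against
# market-specific thresholds, where a breach triggers further
# due diligence. Thresholds and market names are hypothetical.

MARKET_THRESHOLDS = {
    "Market A": 0.05,  # a 5% gap triggers further due diligence here
    "Market B": 0.01,  # a stricter 1% gap triggers it here
}

def approval_rate_gap(outcomes_group_x, outcomes_group_y):
    """Shared metric: absolute difference in positive-outcome
    (e.g. approval) rates between two groups of applicants."""
    rate_x = sum(outcomes_group_x) / len(outcomes_group_x)
    rate_y = sum(outcomes_group_y) / len(outcomes_group_y)
    return abs(rate_x - rate_y)

def review_needed(market, outcomes_group_x, outcomes_group_y):
    """Apply the common metric, then compare it against the
    threshold configured for this particular market."""
    gap = approval_rate_gap(outcomes_group_x, outcomes_group_y)
    return gap > MARKET_THRESHOLDS[market]
```

The point of the design is that the metric stays comparable across the group, while each market tunes only its threshold and corrective actions: the same observed gap could pass unremarked in Market A yet trigger a review in Market B.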

JW: One of the actively discussed topics is how to translate AI ethical principles into practice. Can you give us some successful examples?  

DH: It is important to have concrete and specific practical applications to begin with. This will allow the application of the principles in a more focused way. For example, what does “being fair” mean when we are trying to prevent fraud? When is it justifiable to be “unfair”? How do these differ when we are determining credit limits, and what does “fair” and “unfair” mean within this context? The specificity allows for practical implementation.

JW: How can financial institutions build internal capacity on AI ethics and governance?

DH: Capacity building should combine training with mandated compliance requirements. Financial institutions can leverage standards developed by organisations such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE), if local legislation or regulations are silent.

I have observed different structures adopted by financial institutions to build capacity. Some have established an AI Centre of Excellence, while others take a more decentralised approach. There is no one-size-fits-all solution. Most importantly, the board and senior management need to understand why the structure is in place – is it for research-only purposes, knowledge transfer, or is it genuinely working towards digital transformation in a responsible manner? Setting the right expectations is key to furthering the integration of AI ethics and governance into the existing ethics and risk management framework.

JW: As the senior AI adviser at the Union Bank of the Philippines, can you share more about your work implementing AI safeguards, as well as the development of a set of fairness metrics?

DH: In my current role we have applied the fairness principle to specific AI applications, such as marketing, fraud detection and credit scoring, and developed an initial measurement that notifies bank staff to review any occurrence of potential inadvertent disadvantage ("unfairness"). We are taking an incremental approach, putting in place a series of safety nets to capture unfairness.
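One way such an initial measurement might work is sketched below: each group's positive-outcome rate is compared against the overall rate, and any group falling short by more than a tolerance is flagged for human review. The group labels, tolerance value and choice of metric are hypothetical illustrations, not the bank's actual methodology.

```python
# Hypothetical sketch of a fairness "safety net": flag any group whose
# positive-outcome rate falls more than `tolerance` below the overall
# rate, so that staff can review the case for inadvertent disadvantage.

def flag_disadvantaged_groups(outcomes_by_group, tolerance=0.10):
    """outcomes_by_group maps a group label to a list of 0/1 outcomes
    (e.g. credit approvals). Returns the labels needing human review."""
    all_outcomes = [o for outcomes in outcomes_by_group.values()
                    for o in outcomes]
    overall_rate = sum(all_outcomes) / len(all_outcomes)
    flagged = []
    for group, outcomes in outcomes_by_group.items():
        group_rate = sum(outcomes) / len(outcomes)
        if overall_rate - group_rate > tolerance:
            flagged.append(group)
    return flagged
```

Crucially, a flag here is a prompt for human review rather than an automated verdict, matching the incremental, staff-in-the-loop approach described above: a shortfall may turn out to be justifiable on inspection.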

We are also partnering with Element AI, a global AI software provider, to ensure that we are walking the talk at the group level. Our goal is to publish case studies and a set of fairness metrics once they are available.

JW: Apart from your former role at the regulator, you are also a data scientist, having earned your bachelor's degree in computer science and AI in 2002 and a PhD in machine learning in 2006. How has the culture of safety and responsibility among AI researchers and data scientists changed in the past two decades?

DH: The culture has definitely shifted. Researchers are not only posing and addressing questions, such as those around unfair bias, but have also developed entire research areas that did not previously exist. For example, on top of the number of conferences dedicated to the topic of AI ethics, [conference organiser] NeurIPS has introduced a requirement that all paper submissions include a statement of the "potential broader impact of their work, including its ethical aspects and future societal consequences". There is a deeper appreciation of, and a better attempt to understand, AI outcomes and what they mean for us as individuals.

  1. Stanford Institute for Human-Centered Artificial Intelligence, AI Index 2019 Report: https://hai.stanford.edu/sites/default/files/ai_index_2019_report.pdf
