As the financial services sector has introduced artificial intelligence (AI) into functions such as chatbots, hiring and lending, controversies have arisen, prompting debate about how AI is deployed and raising concerns around fairness, discrimination and transparency. At the same time, regulators and policymakers around the globe are calling for stronger AI governance. In this discussion, we explore some of these issues with Dr David Hardoon of the Union Bank of the Philippines.
Janet Wong, EOS: Why are AI and data governance important to financial institutions?
Dr David Hardoon, Union Bank of the Philippines: It is their fiduciary duty to oversee AI and data governance. Without adequate control and management, AI deployment can be financially material – regulators have already imposed fines – and can cause reputational damage.
In my previous role as chief data officer and special adviser (artificial intelligence) to the Monetary Authority of Singapore, I established the Fairness, Ethics, Accountability and Transparency (FEAT) principles, which provided a foundation for AI governance in the financial industry. Working with the FEAT committee, we set out the principles and best practice for the use of AI and data analytics. For example, on fairness, we said that individuals or groups of individuals should not be systematically disadvantaged through AI and data analytics-driven decisions, unless those decisions can be justified.
JW: In your view, what are the key challenges for boards of financial institutions in upholding AI governance? What would be your recommended solutions?
DH: AI governance touches on numerous technical and non-technical areas. In my view, there are two key challenges. First, AI may reveal core social and cultural dilemmas. There is no universal definition of fairness, as the perception of fairness depends on context and varies across cultures and geographies. For example, the definitions and classifications of marginalised or disadvantaged people vary across North America, Europe and Asia.
Research studies show that people's sensitivity to AI ethical issues differs by country, as reflected in national media coverage: Chinese news reports focus more on safety and security, while coverage in the UK and the US puts greater emphasis on fairness and data privacy.1 Solutions must therefore be pragmatic, realistic and largely non-technical, and developed in collaboration with the relevant external government agencies.
Second, there is a challenge around providing end-to-end technical AI governance solutions for each application, as these need to cover the different governance considerations of data quality, model development, model operationalisation and process governance. My recommendation, as my mother used to tell me, is not to boil the ocean to make a cup of tea: we should incorporate governance solutions as they become available. To address these challenges, at the Union Bank of the Philippines we are taking a pragmatic approach, developing common metrics but with different monitoring thresholds and corrective actions for various applications in different markets. For example, a 5% threshold might trigger further due diligence in Market A, but a 1% threshold in Market B.
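To make the idea concrete, here is a minimal sketch of what a common metric with market-specific thresholds and corrective actions might look like. The metric (an approval-rate gap), the market names, the threshold values and the corrective actions are all illustrative assumptions, not the bank's actual configuration.

```python
# Illustrative sketch only: one common metric, market-specific thresholds
# and corrective actions. All names and values here are hypothetical.
from dataclasses import dataclass


@dataclass
class MarketPolicy:
    threshold: float        # metric gap that triggers further due diligence
    corrective_action: str  # response when the threshold is breached


POLICIES = {
    "market_a": MarketPolicy(threshold=0.05, corrective_action="manual review"),
    "market_b": MarketPolicy(threshold=0.01, corrective_action="escalate to model owner"),
}


def approval_rate_gap(group_1, group_2):
    """Common metric: absolute difference in approval rates between two groups."""
    return abs(sum(group_1) / len(group_1) - sum(group_2) / len(group_2))


def check_market(market, group_1, group_2):
    """Return the market's corrective action if its threshold is breached, else None."""
    policy = POLICIES[market]
    gap = approval_rate_gap(group_1, group_2)
    return policy.corrective_action if gap > policy.threshold else None


# The same 3-percentage-point gap passes in market_a but triggers action in market_b.
g1 = [1] * 53 + [0] * 47  # 53% approval rate
g2 = [1] * 50 + [0] * 50  # 50% approval rate
print(check_market("market_a", g1, g2))  # None: 3% gap is below the 5% threshold
print(check_market("market_b", g1, g2))  # "escalate to model owner": 3% exceeds 1%
```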
JW: One of the actively discussed topics is how to translate AI ethical principles into practice. Can you give us some successful examples?
DH: It is important to begin with concrete, specific practical applications, which allows the principles to be applied in a more focused way. For example, what does "being fair" mean when we are trying to prevent fraud? When is it justifiable to be "unfair"? How do these answers differ when we are determining credit limits, and what do "fair" and "unfair" mean within that context? This specificity is what allows for practical implementation.
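As a hedged illustration of that specificity, the sketch below operationalises "fair" differently for two applications: an approval-rate (demographic parity) gap for credit limits, and a false-positive-rate gap for fraud detection, where the harm to watch is wrongly flagging legitimate customers in one group more often than another. Neither metric is prescribed by the FEAT principles or the interviewee; they are common choices from the fairness literature, used here as assumptions.

```python
# Illustrative only: "fairness" operationalised differently per application.
# These metric choices are assumptions, not prescriptions from FEAT.

def demographic_parity_gap(approved_a, total_a, approved_b, total_b):
    """Credit limits: how different are approval rates across groups?"""
    return abs(approved_a / total_a - approved_b / total_b)


def false_positive_rate_gap(fp_a, legit_a, fp_b, legit_b):
    """Fraud detection: how often are *legitimate* customers in each group
    wrongly flagged? Catching more actual fraudsters in one group may be a
    justifiable "unfairness"; wrongly blocking legitimate customers is not."""
    return abs(fp_a / legit_a - fp_b / legit_b)


# Credit: 300/1000 approvals in group A vs 250/1000 in group B -> 5-point gap.
print(demographic_parity_gap(300, 1000, 250, 1000))  # 0.05

# Fraud: 20 of 900 legitimate group-A customers flagged vs 45 of 900 in group B.
print(false_positive_rate_gap(20, 900, 45, 900))     # ~0.028
```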
JW: How can financial institutions build internal capacity on AI ethics and governance?
DH: Capacity building would be a combination of training and mandating compliance requirements. Financial institutions can leverage standards developed by organisations such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE), if local legislation or regulations are silent.
I have observed different structures adopted by financial institutions to build capacity. Some have established an AI Centre of Excellence, while others take a more decentralised approach. There is no one-size-fits-all solution. Most importantly, the board and senior management need to understand why the structure is in place – is it for research-only purposes, knowledge transfer, or is it genuinely working towards digital transformation in a responsible manner? Setting the right expectations is key to furthering the integration of AI ethics and governance into the existing ethics and risk management framework.
JW: As the senior AI adviser at the Union Bank of the Philippines, can you share more about your work implementing AI safeguards, as well as the development of a set of fairness metrics?
DH: In my current role we have applied the fairness principle to specific AI applications, such as marketing, fraud detection and credit scoring, and developed an initial measurement that notifies bank staff to review any occurrence of potential inadvertent disadvantage ("unfairness"). We are taking an incremental approach, putting in place a series of safety nets to capture unfairness.
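A minimal sketch of what one such safety net might look like, assuming a rolling window of decisions and an approval-rate gap as the measurement; the class, threshold and alerting mechanism are hypothetical. The design point is that a breach triggers human review by bank staff, not automatic correction of the model.

```python
# Hypothetical sketch of an incremental fairness safety net: recompute a simple
# measurement over a rolling window of decisions and flag breaches for review.
from collections import deque


class FairnessSafetyNet:
    def __init__(self, threshold, window=1000):
        self.threshold = threshold
        self.decisions = deque(maxlen=window)  # (group, approved) pairs

    def record(self, group, approved):
        self.decisions.append((group, approved))

    def gap(self):
        """Largest approval-rate difference between any two groups in the window."""
        by_group = {}
        for group, approved in self.decisions:
            by_group.setdefault(group, []).append(approved)
        rates = [sum(v) / len(v) for v in by_group.values()]
        return max(rates) - min(rates) if len(rates) >= 2 else 0.0

    def check(self):
        """Notify staff on a breach; the decision stays with a human reviewer."""
        if self.gap() > self.threshold:
            print(f"REVIEW NEEDED: approval-rate gap {self.gap():.1%} "
                  f"exceeds threshold {self.threshold:.1%}")


net = FairnessSafetyNet(threshold=0.05)
for i in range(200):
    net.record("group_a", approved=(i % 2 == 0))  # ~50% approval
    net.record("group_b", approved=(i % 5 != 0))  # ~80% approval
net.check()  # prints a review alert: 30-point gap exceeds 5%
```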
We are also partnering with Element AI, a global AI software provider, to ensure that we are walking the talk at the group level. Our goal is to publish case studies and a set of fairness metrics once they are available.
JW: Apart from your former role at the regulator, you are also a data scientist, having gained your bachelor's degree in computer science and AI in 2002 and your PhD in machine learning in 2006. How has the culture of safety and responsibility among AI researchers and data scientists changed over the past two decades?
DH: The culture has definitely shifted. Researchers are not only posing and addressing questions, such as on unfair bias, but have developed entire research areas that did not previously exist. For example, beyond the growing number of conferences dedicated to AI ethics, [conference organiser] NeurIPS has introduced a requirement that all paper submissions include a statement of the "potential broader impact of their work, including its ethical aspects and future societal consequences". There is a deeper appreciation of, and a better attempt to understand, AI outcomes and what they mean to us as individuals.
1 Stanford Human-Centered Artificial Intelligence, AI Index 2019 Report (PDF).