Guan Wang, Senior Data Scientist and Vice President, Group Data Services, Swiss Re Asia Pte. Ltd
Responsible AI and its Principles
Decision systems have long been used across industries. Banks use automated credit assessment systems to decide on loan applications. Insurers use automated underwriting systems to approve insurance policies. Artificial Intelligence, or AI, is simply another system for automating the decision process. So what is special about AI, that it is so powerful on the one hand, yet people fear losing control of it on the other?
A major characteristic of AI, or more specifically of Machine Learning systems, is that they are heavily data-driven. Traditional decision systems can also be data-driven, but there the data is usually absorbed into domain expert knowledge through research and then used to manually craft rules. Domain experts must do this in a responsible way. Machine Learning, however, depends heavily on the data itself to learn the relationship between the supporting attributes and the target variable, and to build sometimes very complicated models. While business experts are still there to validate the model results, a whole new level of strategies and techniques is needed to ensure a responsible AI system.
Transparency and Fairness are two aspects of AI systems that both academia and industry are investigating intensively. This is to address the concern that AI models can be black boxes whose logic is difficult for humans to interpret and evaluate. Other principles such as Ethics and Accountability are also being discussed and researched, for example in the Veritas project run by the Monetary Authority of Singapore together with a group of financial institutions.
Standards are Non-Standard
Most will agree on the high-level principles of responsible AI. However, when it comes to detailed definitions and implementations, different industries, markets, or even different people will have different opinions. There are many ongoing discussions among stakeholders, and we do see that many proposed standards are controversial and remain non-standard across markets and industries.
While regulatory coverage of AI systems is still developing, financial institutions must be self-disciplined and hold to the values they believe in when developing, deploying, and monitoring AI systems as part of their digital transformation journey. A customized digital governance framework needs to be developed and kept ready to be tweaked and evolved in line with the latest regulatory and social requirements.
Model-Centric vs Data-Centric
Solutions for evaluating AI systems, especially for fairness and transparency, are typically either model-centric or data-centric.
Model-centric solutions look deeply into the model structure and algorithms, interpreting and explaining the model by analyzing its intrinsic characteristics. For example, in contrast to complex neural network models, researchers at Google developed the TensorFlow Lattice model structure, which is much easier to interpret. Microsoft researchers have developed the Explainable Boosting Machine, a "glass box" model that is as accurate as state-of-the-art techniques like random forests and gradient boosted trees but provides exact explanations.
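To make this concrete, the sketch below trains an Explainable Boosting Machine with Microsoft's open-source interpret package. The synthetic data and feature names are illustrative assumptions, not a real underwriting dataset.

```python
# A minimal sketch of a glass-box, model-centric approach using the
# Explainable Boosting Machine from the interpret package (pip install interpret).
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # four illustrative features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # simple synthetic target

# Hypothetical feature names for readability of the explanation.
ebm = ExplainableBoostingClassifier(
    feature_names=["age", "income", "tenure", "claims"]
)
ebm.fit(X, y)

# Global explanation: the exact additive contribution of each feature,
# inspected directly from the model rather than approximated post hoc.
show(ebm.explain_global())
```

Because the EBM is additive, each feature's contribution can be read off exactly, which is what makes it a "glass box" rather than a post-hoc approximation.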
Data-centric solutions treat AI models as black boxes, using techniques such as Shapley Values to interpret the model and quantitative metrics to evaluate the fairness of its outcomes. Data-centric solutions are currently the mainstream approach. One interesting point is that some data-centric solutions can be applied well beyond AI, to existing automated decision systems or even manual decision processes, to evaluate properties like fairness.
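As a minimal sketch of this black-box approach, the example below uses the shap package to compute Shapley Value attributions for an arbitrary classifier, then checks one simple quantitative fairness metric, the demographic parity difference. The model, features, and the "gender" protected attribute are illustrative assumptions.

```python
# A data-centric evaluation sketch: the model is treated as a black box,
# explained from the outside with Shapley Values (pip install shap),
# and its outcomes checked against a simple fairness metric.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50, 10, 1000),
    "tenure": rng.integers(0, 30, 1000),
    "gender": rng.integers(0, 2, 1000),  # hypothetical protected attribute (0/1)
})
y = (X["income"] + rng.normal(0, 5, 1000) > 52).astype(int)

# Any model could sit here; we only query its inputs and outputs.
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.Explainer(model, X)
shap_values = explainer(X)         # per-prediction feature attributions
shap.plots.beeswarm(shap_values)   # global view of feature influence

# Demographic parity difference: the gap in approval rates between
# the two groups of the protected attribute.
pred = model.predict(X)
rates = pd.Series(pred).groupby(X["gender"]).mean()
print("Demographic parity difference:", abs(rates[1] - rates[0]))
```

Note that the fairness check at the end needs only decisions and group membership, which is why the same data-centric metrics can be applied to legacy rule-based systems or even manual decision processes.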
This is an exciting era of AI applications and an equally exciting time to explore the right thing to do, with responsible principles, for and beyond AI.