Explainable and Responsible AI for Finance
About This Course
Upon completion of this 3-day course, attendees will have covered the following topics:
1. Introduction to explainable AI (XAI)
• What is AI interpretability?
• Interpretability vs accuracy trade-off
• Ethics in AI/ML
• Case studies with focus on banking and insurance
2. Interpretable models (white-box models)
• Explain interpretable models such as regression and decision trees, including built-in measures such as feature importance
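A minimal sketch of the kind of white-box explanation covered in this module: reading a decision tree's built-in feature importances. This assumes the scikit-learn package; the toy dataset stands in for a banking or insurance dataset and is not part of the course materials.

```python
# A minimal sketch of a white-box explanation: a decision tree's built-in
# feature importances (assumes scikit-learn; the toy dataset is a stand-in
# for a banking or insurance dataset).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Rank features by the impurity reduction they contribute across the tree.
ranked = sorted(zip(data.feature_names, tree.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```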
3. Black-box models: Model-agnostic methods
• Global model-agnostic explanation methods
• Local model-agnostic explanation methods
• Case studies with focus on banking and insurance
• Workshop: Explain the random forest model using LIME
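A minimal sketch of the workshop idea above, assuming the scikit-learn and lime packages: train a random forest and use LIME's tabular explainer to attribute one prediction to its most influential features. The dataset and parameters are illustrative, not the course's own materials.

```python
# A minimal sketch: explain one prediction of a random forest with LIME.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Train the black-box model to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Build a local, model-agnostic explainer around the training data.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")

# Explain a single test instance: LIME perturbs it, fits a simple local
# surrogate model, and reports the most influential features.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```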
4. Black-box models: Model-specific and example-based methods
• Explain gradients in neural networks (see the sketch after this module's outline)
• Explain ML model behavior using examples
• Workshop: Explain the neural network model using LIME and adversarial samples
• Case studies with focus on banking and insurance
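A minimal numpy sketch of two ideas from this module: the gradient of the loss with respect to the input acts as a simple saliency measure, and the same gradient drives an FGSM-style adversarial sample. The model weights and input below are made up for illustration.

```python
# Gradient-based explanation and an FGSM-style adversarial sample for a
# tiny logistic model (weights and input are hypothetical).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a single-output model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, 0.3, -1.2])   # one input instance

# Forward pass: predicted probability of the positive class.
p = sigmoid(w @ x + b)

# Gradient of the negative log-likelihood (label y = 1) w.r.t. the input.
# For a sigmoid output this is (p - y) * w, and it acts as a simple
# saliency map: larger magnitude means more influence on the prediction.
y = 1.0
grad_x = (p - y) * w
print("prediction:", round(p, 3))
print("input gradient (saliency):", grad_x)

# FGSM-style adversarial sample: nudge the input in the direction that
# increases the loss, bounded by a small epsilon.
epsilon = 0.2
x_adv = x + epsilon * np.sign(grad_x)
print("adversarial prediction:", round(sigmoid(w @ x_adv + b), 3))
```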
5. Python implementation
• Hands-on workshop exercise with LIME: students work on a problem, followed by a discussion of the solution
• Case studies with focus on banking and insurance
6. Fairness in AI/ML (based on principles outlined by MAS)
• System objectives and context:
o What are the business objectives and how is AI used to achieve these?
o Who are the individuals and groups considered to be at risk?
• Examining data and models:
o What are the errors and biases present in the data used?
o How are these being mitigated?
• Measuring disadvantage:
o What are the quantitative estimates of system performance against fairness? (see the sketch after this module's outline)
o What are the trade-offs between fairness and other objectives?
• Justifying the use of personal attributes:
o What are the personal attributes being used?
o How is the inclusion of personal attributes being justified?
• System monitoring and review:
o How is the system being monitored to prevent abnormal behavior?
• Case studies with focus on banking and insurance
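A minimal sketch of one way to measure disadvantage quantitatively: comparing approval rates across a sensitive group and reporting the demographic parity difference and disparate impact ratio. The decisions and group labels below are synthetic and purely illustrative.

```python
# Compare approval rates across a sensitive group (synthetic data).
import numpy as np

# Hypothetical model decisions (1 = approve) and group membership,
# e.g. an at-risk group identified in the "system objectives" step.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_a = approved[group == 0].mean()   # approval rate, group A
rate_b = approved[group == 1].mean()   # approval rate, group B

# Demographic parity difference and disparate impact ratio are two common
# quantitative estimates of unequal treatment.
print("demographic parity difference:", rate_a - rate_b)
print("disparate impact ratio:", rate_b / rate_a)
```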
7. Accountability in AI
• Review of accountability framework for AI models
• Auditability of AI models
o Reproducibility to understand what, if anything, went wrong, who was responsible, and who should ensure it is corrected
• Tools and approaches that help with auditability (MLOps tools such as MLflow support model and data management and traceability)
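A minimal sketch of how an MLOps tool such as MLflow can support auditability: each training run logs its parameters, metrics, and model artifact so a result can be reproduced and traced later. It assumes the mlflow and scikit-learn packages; the experiment name and dataset are illustrative, and exact call details may vary by MLflow version.

```python
# Log a training run for later reproducibility and traceability.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("credit-model-audit-demo")   # hypothetical name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5, "random_state": 0}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    # Record the inputs and outputs of this run for the audit trail.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")
```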
8. Security, privacy, and governance in AI/ML
• Introduction to federated machine learning and analytics (see the sketch after this module's outline)
• Intentional and unintentional failures in ML
• Introduction to adversarial machine learning
• Clearly defining who is responsible for data, output, and decisions
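A minimal numpy sketch of the federated averaging idea behind federated machine learning: each party trains on its own data, and only model parameters, never raw records, are shared and averaged. The two "clients" and their data below are synthetic.

```python
# Federated averaging for a simple linear model (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training of a linear model by gradient descent."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Two parties (e.g. two banks) with private datasets that never leave them.
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated rounds: broadcast the global model, train locally, average.
global_w = np.zeros(2)
for _ in range(10):
    local_ws = [local_sgd(global_w.copy(), X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(global_w, 2))   # close to [1.0, -2.0]
```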
What You'll Learn
AI/ML algorithms have lately shown tremendous potential for better predictive performance as the computational power of machines has grown with advances in technology. At the same time, it is becoming increasingly important to ensure that any AI and ML models used to make decisions that affect customers follow the principles of fairness, ethics, accountability, explainability, privacy and security, and governance. These topics are studied through the lens of Responsible AI. Most of the top technology companies, including Google and Microsoft, have outlined their own versions of Responsible AI practices, released open-source toolkits, and invested in research into this important area.
In financial services companies (e.g., banks and insurance companies), members of underwriting, account management, policy management, claims administration, fraud detection, and customer experience management teams frequently develop and/or use AI and ML models to make decisions. This course is intended to enable these practitioners to adopt Responsible AI practices in their decision making.
Entry Requirements
Please see course weblink for more information