Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage.
In this hack session, we will discuss the concepts and capabilities needed to test a model for bias and to generate explanations for its predictions. We will also briefly cover the wider spectrum of explainability methods, notably data explanations, metrics, and persona-specific explanations.
Structure of the Hack Session
- Implement an ML model on a financial dataset, fit it with well-known algorithms such as Random Forest, and evaluate it (a minimal training sketch follows this list)
- Persist the model for later scoring and monitoring
- Set up a datamart and bind it to the ML engines (see the datamart sketch after this list)
- Score the model so that the monitors can be configured
- Enable quality monitoring, feedback logging, and fairness monitoring (see the monitoring sketch below)
- Use historical performance metrics to identify the transactions to explain
- View bias and model explanations both visually and as text (see the explainability sketch below)
- Open discussion on implementing the above flow for another dataset
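
To make the first two steps concrete, here is a minimal sketch in Python with scikit-learn. It assumes a hypothetical `credit.csv` financial dataset with a binary 0/1 target column named `Risk` (the session's actual dataset and column names may differ), trains a Random Forest, evaluates it on a held-out split, and persists it with joblib.

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical financial dataset: "credit.csv" with a binary 0/1 target
# column "Risk". Both names are assumptions for illustration.
df = pd.read_csv("credit.csv")
X = pd.get_dummies(df.drop(columns=["Risk"]))  # one-hot encode categoricals
y = df["Risk"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Persist the model for later scoring and monitoring.
joblib.dump(model, "credit_rf_model.joblib")
```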
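
The datamart and engine-binding steps depend on the monitoring platform used in the session; as a platform-neutral illustration, the sketch below stands in a local SQLite table for the datamart and logs every scoring request and response so that the monitors have records to read. The schema and helper function are assumptions, not any platform's actual API.

```python
import sqlite3

import joblib
import pandas as pd

# Stand-in "datamart": a local SQLite table holding scoring payloads.
conn = sqlite3.connect("datamart.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS payload_log (
           scoring_id  INTEGER PRIMARY KEY AUTOINCREMENT,
           features    TEXT,     -- JSON-encoded input record
           prediction  INTEGER,
           probability REAL
       )"""
)

model = joblib.load("credit_rf_model.joblib")

def score_and_log(record: pd.DataFrame) -> int:
    """Score a one-row DataFrame and log the payload to the datamart."""
    pred = int(model.predict(record)[0])
    prob = float(model.predict_proba(record)[0].max())
    conn.execute(
        "INSERT INTO payload_log (features, prediction, probability) "
        "VALUES (?, ?, ?)",
        (record.iloc[0].to_json(), pred, prob),
    )
    conn.commit()
    return pred

# Usage: score one record from the same hypothetical dataset.
X = pd.get_dummies(pd.read_csv("credit.csv").drop(columns=["Risk"]))
score_and_log(X.iloc[[0]])
```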
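
Quality and fairness monitors both consume the logged records. The sketch below shows the underlying arithmetic under stated assumptions: accuracy computed from feedback records (predictions joined with ground-truth labels as they arrive), and disparate impact computed with `Sex` as a hypothetical protected attribute and prediction 1 as the favorable outcome.

```python
import pandas as pd

# Feedback logging: predictions joined with ground-truth labels.
# All column names and values here are illustrative assumptions.
feedback = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1],
    "label":      [1, 0, 0, 1, 0, 1],
    "Sex":        ["male", "female", "female", "male", "female", "male"],
})

# Quality monitoring: accuracy over the feedback window.
accuracy = (feedback["prediction"] == feedback["label"]).mean()
print(f"accuracy over feedback window: {accuracy:.2f}")

# Fairness monitoring: disparate impact, the ratio of favorable-outcome
# rates between the unprivileged and privileged groups.
unpriv_rate = (feedback.loc[feedback["Sex"] == "female", "prediction"] == 1).mean()
priv_rate = (feedback.loc[feedback["Sex"] == "male", "prediction"] == 1).mean()
print(f"disparate impact: {unpriv_rate / priv_rate:.2f}")
```

A disparate impact below roughly 0.8 is a common rule-of-thumb signal that the unprivileged group is being systematically disadvantaged.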
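
Many tools can produce the visual and text explanation views; LIME is one common choice for tabular models and is sketched below, reusing the hypothetical dataset and persisted model from the earlier sketches. The class names are assumptions, and the session's notebook may use a different explainability library.

```python
import joblib
import pandas as pd
from lime.lime_tabular import LimeTabularExplainer

# Rebuild the feature matrix from the hypothetical dataset.
X = pd.get_dummies(pd.read_csv("credit.csv").drop(columns=["Risk"]))
model = joblib.load("credit_rf_model.joblib")

explainer = LimeTabularExplainer(
    training_data=X.values,
    feature_names=list(X.columns),
    class_names=["no_risk", "risk"],  # assumed to match model.classes_
    mode="classification",
)

# Explain one transaction flagged by the historical metrics.
exp = explainer.explain_instance(X.iloc[0].values, model.predict_proba,
                                 num_features=10)

# Text view: per-feature contributions sorted by weight.
for feature, weight in exp.as_list():
    print(f"{feature:40s} {weight:+.3f}")

# Visual view (inside a notebook): exp.show_in_notebook()
```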
Key Takeaways from the Hack Session
- The ability to implement a similar solution for a different dataset
- A reference notebook will be shared
- A new dataset for exploration, along with an implementation guide, will be provided
Check out the video below to learn more about the session.