Note: This is a replay of a highly rated session from the June Spark + AI Summit. Enjoy!
Machine learning model fairness and interpretability are critical for data scientists, researchers, and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important for debugging machine learning models and making informed decisions about how to improve them. In this session, Francesca will go over a few methods and tools that enable you to “unpack” machine learning models, gain insights into how and why they produce specific results, assess your AI system’s fairness, and mitigate any observed fairness issues.
Using open source fairness and interpretability packages, attendees will learn how to:
Speaker: Francesca Lazzeri
– Hi, everyone. Thank you so much for joining my session, “The Importance of Model Fairness and Interpretability in AI Systems.” I’m Francesca Lazzeri. I’m a senior cloud advocate at Microsoft, where I lead a team of data scientists and AI developers building end-to-end solutions on Azure. Specifically, during this session we’re going to talk about how you, as a data scientist or a developer, can build an end-to-end responsible machine learning solution. We have been writing numerous articles around this important topic. The latest article is called “Machine Fairness”; I put a link there so you can check it out. There are also many resources that we’re going to talk about during this session. Some of these resources are open source, such as the InterpretML GitHub and the Fairlearn GitHub, so you can check out all of these links and resources after this session. And again, if you want to talk more, feel free to ask me questions. The agenda for today is divided into three main parts. In the first part we’re going to talk about what responsible AI is, what we mean by responsible artificial intelligence, and then we’re going to look at the two main packages: the first is called InterpretML, and the second is called the Fairlearn toolkit. As you know, we are at a moment in history where we are all leveraging data in order to make significant decisions that really affect people’s lives, in many different domains such as healthcare, justice, finance, education, marketing, and also HR and employment. So it is very important for us, and specifically for our customers, to ensure the safe, ethical, and responsible use of artificial intelligence. We know that AI has the potential to drive big changes in the way we do business.
On the other side, like all great technological innovations of the past, it is important to keep in mind that this type of technological innovation is going to have a very broad impact on society as well. For all these reasons, we think it is important, when you build AI solutions and machine learning algorithms, to ask yourself the following questions: How can I, as a data scientist or developer, design, build, and use AI systems that create a positive impact on individuals and society? How do we best ensure that AI is safe and reliable? How can we attain the benefits of AI while also respecting privacy? These are all open questions that developers and data scientists have, and at Microsoft we are building different tools that you can use and leverage in your existing solutions, or to build new solutions, in order to assess the fairness level of your machine learning application. Moreover, we are also seeing that many customers are facing the same challenge. There are many different reports here; I’m citing a recent one which showed that nine out of 10 organizations are facing ethical issues in the implementation of AI systems. These organizations cited different reasons, including the pressure to quickly implement AI, the failure to consider ethics when implementing AI systems, and the lack of resources dedicated to ethical AI systems. At Microsoft we built a framework that serves as a foundation to guide our thinking as data scientists, and we defined six ethical principles that AI systems should follow. As you can see from this slide, the first four are fairness, reliability and safety, privacy and security, and inclusiveness. These are really key properties that every AI system should achieve. The other two are transparency and accountability.
These two underlie all of the other principles and guide how we design, implement, and operationalize AI systems. So let’s see what we mean by transparency and how we can implement it. What do we actually mean by transparency? We mean two main things: first, that AI systems should be explainable, and second, that AI systems should have algorithms that are accountable, meaning that you can actually understand why they are producing specific results. There are a few use cases for machine learning interpretability, and we can define them in the following two categories. The first one is model designers and evaluators; this is at training time. The second one is end users, or providers of a solution to end users; this is at inferencing time, when your AI application is consumed. There are many different use cases that are important to keep in mind for both categories. For example, data scientists need to explain the output of a model to stakeholders, usually business users and clients, in order to build trust. Another very important and popular use case is when data scientists need tools to verify whether a model’s behavior matches the pre-declared objectives. Finally, data scientists also need tools to ensure the fairness of their trained models. Other use cases, more from the inferencing-time category, are when your AI predictions, the results that you get from your AI application, need to be explained at inferencing time. Some of the most popular cases are in the healthcare and finance industries: for example, why a model classified Fabio, a specific customer, as at risk for colon cancer.
Another important question from the finance industry, which we receive very often, is why a specific client, in this case we call her Rosine, was denied a mortgage loan, or why her investment portfolio carries a higher risk. These are all questions that somehow you have to know how to answer, and specifically you have to know why you got the specific results. That’s why at Microsoft we developed the Interpretability Toolkit. This is a toolkit that really helps data scientists interpret and explain their models. We put together this toolkit in order to explain machine learning models globally, meaning on all the data, or locally, on a specific data point, using state-of-the-art technology in a very easy-to-use way. Second, we wanted to incorporate the cutting-edge interpretability solutions developed by Microsoft, but also leverage the open-source community’s solutions; this aspect is very important. And finally, we were able to create a common API and data structure across the integrated libraries and integrate these with Azure services. InterpretML is a toolkit that you can find at aka.ms/interpretMl-toolkit. It really gives you access to state-of-the-art interpretability techniques through an open, unified API, and also provides a lot of visualizations that you can use to better understand why your model is predicting specific results. With this toolkit, you can understand the model using a wide range of explainers and techniques, through interactive visuals; you can also choose algorithms and experiment with different combinations of algorithms. You can explore, as a data scientist, different model attributes, such as performance or the global and local features, and you can compare multiple models at the same time, which is also very nice.
In order to find more information, you can look at this GitHub repo, and also remember that you can run what-if analysis as you manipulate the data and view the impact on your model. So why did we start this project? In the GitHub repo, you will see that the interpret-community package extends Interpret, an open-source Python package from Microsoft Research that is used to train interpretable models and also helps to explain black-box systems; in just a few minutes we are going to see what we mean by a black-box system. The interpret-community package extends these capabilities with additional interpretability techniques and utility functions to handle real-world data sets and workflows. With these packages, you can train interpretable glass-box models and explain black-box systems. You can also use these packages to understand your model’s global behavior, or to understand the reasons behind each individual prediction. As you can see, Azure Machine Learning offers a sort of wrapper, which we call azureml-interpret; it’s really a wrapper because it helps you save explanations and run history, and run remote and parallel compute for explanations on Azure ML compute. So this is an additional capability that Azure ML can offer you. It is also able to create a scoring explainer for you, and most importantly, if you want to push your model to production, it can deploy these explainers for you. In the GitHub repo, you will also see what we call interpret-text, which builds on Interpret; we have added extensions to support text models. There are two different types of explanations that are supported. As you can see, there are what we call glass-box explanations: these are, for example, explainable boosting machines, linear models, decision trees, and rule systems. And we also have black-box explanations, like LIME, SHAP, partial dependence, and sensitivity analysis.
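To make the glass-box idea concrete, here is a minimal pure-Python sketch of the additive structure that models like GLMs and EBMs expose: the prediction is a sum of per-feature contributions, so the explanation is the model itself. The feature names, shape functions, and intercept below are hypothetical, chosen just for illustration; this is not code from the InterpretML library.

```python
# Hypothetical per-feature shape functions, as a glass-box model would learn them.
shape_functions = {
    "age":    lambda v: 0.8 * v,                 # linear effect
    "income": lambda v: 0.002 * v,               # linear effect
    "tenure": lambda v: -1.5 if v < 2 else 0.5,  # step effect
}
intercept = 10.0

def predict(x):
    """Additive prediction: intercept plus the sum of per-feature effects."""
    return intercept + sum(f(x[name]) for name, f in shape_functions.items())

def explain(x):
    """The explanation IS the model: each feature's exact contribution."""
    return {name: f(x[name]) for name, f in shape_functions.items()}

x = {"age": 30, "income": 50_000, "tenure": 1}
contributions = explain(x)
# Lossless: contributions sum exactly to prediction minus intercept.
assert abs(sum(contributions.values()) - (predict(x) - intercept)) < 1e-9
```

Because every contribution can be read off (and even edited) directly, no post-hoc explainer is needed, which is exactly the property that distinguishes glass-box from black-box models.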
Black-box models, for example big neural networks, are challenging to understand. Black-box explainers can analyze the relationship between input features and output predictions to interpret models. As I mentioned in my previous slide, some examples include LIME and SHAP. Talking about SHAP, let’s take a closer look at it. SHAP is a game-theory approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using what we call the classic Shapley values from game theory, and also their related extensions. So let’s see together how we can actually apply SHAP to a real sample machine learning use case. Let’s consider a black box that predicts the price of a condo or an apartment based on these features. As you can see, there is proximity to a green area, such as a park, and also whether the building is pet friendly or not; in this case, this feature is negative. With these features in mind, our model predicts the price of the apartment: 300K euros. The average price prediction for all the apartments is about 310K euros, so the delta here is negative: minus 10K. How much has each of these features contributed to the prediction compared to the average prediction? These are the values that the Shapley method distributes across the features. So let’s start with the park: how did the park contribute to this result? We have plus 10K. Then we have the fact that cats are banned, so the building is not pet friendly, which contributed in a negative way: minus 30K. The size of the apartment is also a very important feature in this case, and we see that it contributed plus 10K.
And then we also see that there is a final feature, the fact that the apartment is on the second floor, which had a zero net contribution. So this final feature was not really impacting our model’s result. So how did Shapley actually calculate all these values? First, we take the feature of interest, for example “cats banned,” and we remove it from the feature set. Second, we take the remaining features and we generate all possible coalitions. And finally, we add and remove our feature of interest to each of these coalitions, and we calculate the difference that it makes. So this is really how SHAP works; this is the logic behind SHAP. Of course, there are some pros and cons that are important to keep in mind when you decide to use SHAP. For example, SHAP is great because it is based on a solid theory and distributes the effects in a very fair way. It also allows contrastive explanations: instead of comparing a prediction to the average prediction of the entire data set, you can compare it to a subset, or even to a single data point. In terms of the cons of SHAP, there is computation time: there are 2^K possible coalitions of the feature values for K features. Sometimes it’s difficult to understand, so it can be misinterpreted. And finally, the inclusion of unrealistic data instances when features are correlated is also very possible. So these are at least the things you should keep in mind when you decide to use SHAP. As I said, there are also different models that you can use, and different interpretability approaches based on how you want to use these models. In terms of glass-box models, these are models that are interpretable due to their structure; examples are explainable boosting machines, linear models, and decision trees.
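The coalition procedure described for the apartment example can be sketched exactly in plain Python: for each feature, average its marginal contribution over every coalition of the remaining features, with the classic Shapley weights. The toy price model and the feature/baseline values below are hypothetical (chosen so the numbers are easy to check), and missing features are simply replaced by their baseline values; this is an illustration of the idea, not the SHAP library itself.

```python
import itertools
from math import factorial

def exact_shapley(model, x, baseline):
    """Exact Shapley values over n features.

    Features outside a coalition are 'removed' by replacing them
    with their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in itertools.combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical price model; features = [near_park, cats_allowed, size_m2]
def price(f):
    return 200_000 + 10_000 * f[0] + 30_000 * f[1] + 1_200 * f[2]

x = [1, 0, 50]          # near a park, cats banned, 50 m^2
baseline = [0, 1, 45]   # an 'average' apartment
phi = exact_shapley(price, x, baseline)
print([round(v) for v in phi])  # [10000, -30000, 6000]
```

Note the efficiency property: the values sum exactly to the difference between this apartment's prediction and the baseline prediction, which is why Shapley values are said to distribute the effect fairly.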
Glass-box models yield lossless explanations and are editable by domain experts, which is something very nice to have when you want to use and leverage these glass-box models. In terms of GLMs, Generalized Linear Models: as you can see, a GLM is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The main characteristics of Generalized Linear Models are that they are the current standard for interpretable models and that they learn an additive relationship between data and response. Another example is the Explainable Boosting Machine, also called EBM. This is a type of interpretable model that has been developed by Microsoft Research. It is a very interesting model because it uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to improve on traditional generalized additive models. This is why Explainable Boosting Machines are very accurate, often as accurate as state-of-the-art techniques like random forests and gradient-boosted trees. In this second part of the presentation, we are going to focus on fairness. We are going to see the different fairness principles, which aim to tackle the question of how we can ensure that AI systems treat everyone in a fair way. The main goal of fairness work is to provide more positive outcomes and avoid harmful outcomes of AI systems for different groups of people. There are different types of harm, as you can see from this slide. Roughly speaking, I would say that we derived these different types of harm from the taxonomy that Microsoft Research created, and there are five different types of harm that you can see in a machine learning system.
While I have the definitions of all of them on this slide, for the scope of this session we’ll actually just focus on the first two of them. The first is allocation: this is the harm that can occur when an AI system extends or withholds opportunities, resources, or information to specific groups of people. And then we have quality of service: this is whether a system works as well for one person as it does for another person. This is probably one of the most common types of harm across many different applications. For the fairness part, Microsoft developed a new toolkit called Fairlearn. This is a new approach to measuring and mitigating unfairness in systems that make predictions, serve users, or make decisions about allocating resources, opportunities, or information. There are many ways that AI systems can behave unfairly. For example, AI can affect quality of service, which again is whether a system works as well for one person as it does for another. AI can also affect allocation, which again is the harm that occurs when an AI system extends or withholds opportunities, resources, or information to specific groups of people. You can find more about the toolkit at aka.ms/FairlearnAI. In this toolkit, there are different focuses and, I would say, different capabilities. The main goal of this toolkit is to empower developers of artificial intelligence systems to assess their systems’ fairness and mitigate any observed fairness issues. Most importantly, it helps users identify and mitigate unfairness in their machine learning models, with a focus on group fairness. So now, let’s actually jump to a demo. I want to show you how you can use the interpretability toolkit.
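To make the group-fairness idea concrete, here is a pure-Python sketch of one common disparity check: the selection rate per sensitive group and the demographic parity difference (the largest gap between group selection rates). Fairlearn computes disparity metrics of this kind, but the code below is a toy illustration of the concept with made-up predictions, not the Fairlearn API.

```python
from collections import defaultdict

def selection_rates(y_pred, groups):
    """Fraction of positive (1) predictions within each sensitive group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, groups):
    """Largest gap between any two groups' selection rates (0 = parity)."""
    rates = selection_rates(y_pred, groups).values()
    return max(rates) - min(rates)

# Toy loan-approval predictions (1 = approved) with a sensitive attribute.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(y_pred, groups))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A gap of 0.5 between the groups would flag a potential allocation harm worth investigating, which is exactly the kind of assessment the toolkit is designed to support at scale.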
In this demo, we’re going to see how you can use the interpretability toolkit for tabular data in Azure Databricks. We’re going to see which toolkit you can use, how to download the explanation results from an explanation experiment, and how to visualize the feature importance. Implementing an advanced analytics solution, in every organization and for each of our customers, is, I would say, a four-step process. First, you need to ingest the data from a variety of data sources, including batch and streaming data; as you can see, there are different options here, as this architecture shows us. Then, a very important part is that you need to store the data that has been ingested, regardless of the data volume, variety, and velocity. Here you can do it, of course, with different types of products. When you get to the prep-and-train stage, you can use Azure Databricks to train and deploy your model. As you can see, in Databricks we have an option called Databricks Runtime ML that includes a variety of popular ML libraries. The libraries are updated with each release to include new features. Within the supported libraries there is a subset of top-tier libraries; for these libraries, Azure Databricks provides a faster update cadence, updating to the latest package releases with each runtime release. So this is very good for data scientists as well. In terms of the dataset, we are going to use the Breast Cancer Wisconsin dataset, which is a public dataset. Here you can see the different attributes that we’re going to use for this demo: not only the ID number and diagnosis, which are probably the most important attributes, but also real-valued features that are computed for each cell nucleus, which we’re going to analyze in this specific demo.
So first of all, you need to install the azureml-interpret and azureml-contrib-interpret packages. Next, you need to train a sample model in a local Jupyter notebook. As you can see, you can again use the breast cancer dataset and then split the data into train and test sets. Third, you can call the explainer locally. Here, you need to initialize an explainer object, passing your model and some training data to the explainer’s constructor. In order to make your explanations more informative, you can also choose to pass in the feature names, and output class names if you’re doing classification. The code that you see on these lines shows you how to initialize an explainer object with different types of explainers; here specifically, you have the TabularExplainer and the PFIExplainer in a local environment. Then, if you want to explain the entire model’s behavior, you can call what we call the global explanation. This is going to give you a set of visualizations that you can leverage to better understand and interpret your models. Some of these visualizations are produced from your Python code just by using these packages, and I want to show you some of the data visualizations that this package can create for you. As you can see, here there is an overall view of the trained model, along with its predictions and explanations. We have the data exploration, which displays an overview of the dataset along with the prediction values. Then we have the global importance, which aggregates feature importance values of individual data points to show the model’s overall top-K important features. K is, of course, configurable, so you can change this number of displayed important features and get some understanding of the underlying model’s overall behavior. Then we have the explanation exploration.
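The PFIExplainer mentioned above is based on permutation feature importance, and the core idea fits in a few lines of plain Python: shuffle one feature column at a time and measure how much the model's score drops. The toy model, data, and accuracy metric below are hypothetical illustrations, not the azureml-interpret implementation.

```python
import random

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Average score drop when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-label relationship
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            drops.append(base - metric(y, [model(r) for r in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model that only uses feature 0; feature 1 is pure noise.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[i % 2, i * 0.1] for i in range(20)]
y = [model(row) for row in X]

imp = permutation_importance(model, X, y, accuracy)
assert imp[0] > imp[1]   # shuffling the used feature hurts accuracy
```

Shuffling the ignored feature leaves the score unchanged (importance near zero), which is why permutation importance is a convenient model-agnostic global explanation: it needs only predictions, not model internals.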
This demonstrates how a feature affects a change in the model’s prediction values, or the probability of the prediction values. It’s a very good visualization if you want to show the impact of a feature interaction. Finally, we have the summary importance. This uses the individual feature importance values across all the data points to show the distribution of each feature’s impact on the prediction value. Using this diagram, you can investigate, for example, in which direction the feature values affect the prediction value. Another way to better understand what your model is actually doing is by using a local explanation. You can see that here, and you can get the individual feature importance values of different data points by calling the explanation for an individual instance or for a group of instances. Here, different types of visualizations are created. First of all, we have the local importance. This shows the top-K important features for an individual prediction, and it’s very helpful when a data scientist wants to illustrate the local behavior of the underlying model on a specific data point. Then, we have the perturbation exploration; this is for what-if analysis. This visualization allows changes to the feature values of the selected data point and observes the resulting changes to the prediction value. Finally, another important visualization that I want to share with you is called the individual conditional expectation. This visualization allows feature values to change from a minimum value to a maximum value, so it’s very helpful when a data scientist needs to illustrate how a data point’s prediction changes when a feature changes. Again, this was just an overview of what the interpretability toolkit can do for you, and how you can leverage it in your solutions. I also want to share the contacts of the product team who worked on this toolkit.
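The individual conditional expectation idea just described can be sketched very simply: hold one data point fixed, sweep a single feature across a grid from its minimum to its maximum, and record the prediction at each value. The risk model and patient features below are made up for illustration.

```python
def ice_curve(model, row, feature_idx, grid):
    """Predictions for one data point as a single feature sweeps a grid."""
    curve = []
    for value in grid:
        perturbed = list(row)
        perturbed[feature_idx] = value  # only this feature changes
        curve.append(model(perturbed))
    return curve

# Hypothetical risk model over [age, bmi].
model = lambda row: row[0] / 10 + row[1] / 2
patient = [40, 22.0]

grid = range(20, 70, 10)                   # sweep 'age' from 20 to 60
curve = ice_curve(model, patient, 0, grid)
print(curve)  # [13.0, 14.0, 15.0, 16.0, 17.0]
```

Plotting one such curve per data point is exactly what the ICE visualization in the toolkit shows: how each individual prediction responds as one feature varies while everything else stays fixed.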
As you can see, you can find their names and emails there in case you want to follow up offline with the product team who put together all the toolkits that I presented today. Again, this is one of the articles that you can use to learn more and to find some of the resources that have been used today during this session. In terms of resources, I just want to share those with you one more time: these are all the packages and the key GitHub repos that have been used during this session. You can also find me on Twitter, GitHub, and Medium. Thank you very much.
Francesca Lazzeri, PhD is an experienced scientist and machine learning practitioner with over 12 years of both academic and industry experience. She is author of the book “Machine Learning for Time Series Forecasting with Python” (Wiley) and many other publications, including technology journals and conferences. Francesca is Adjunct Professor of AI and machine learning at Columbia University and Principal Cloud Advocate Manager at Microsoft, where she leads an international team (across USA, Canada, UK and Russia) of cloud AI developer advocates and engineers, managing a large portfolio of customers in the research/academic/education sector and building intelligent automated solutions on the cloud. Before joining Microsoft, she was a research fellow at Harvard University in the Technology and Operations Management Unit.