The Most Comprehensive Guide On Explainable AI

Mohamed Last Updated : 30 Oct, 2024
10 min read

Artificial intelligence systems have become remarkably capable, but their inner workings are often opaque. Explainable AI is an emerging set of techniques that aims to communicate how a model reaches its results in a way the ordinary person can understand, making outputs more transparent and easier to act on. In this article we will look at Explainable AI, or XAI.

Overview:

  • Dive into Explainable AI, exploring how it makes complex AI models accessible and transparent, bridging the gap between AI decision-making and human understanding.
  • Learn how XAI enhances trust, accountability, and fairness by offering clear insights into AI model predictions, especially in sensitive fields like healthcare and finance.
  • Uncover how methods like interpretable data, interpretable predictions, and interpretable algorithms allow AI systems to explain their processes and rationales in a human-friendly way.
  • Compare transparent, explainable, and interactive AI models to see how each fosters a deeper connection between users and AI outcomes.
  • Gain insights into real-world XAI applications and discover coding frameworks that help integrate transparency in AI models for better analysis and troubleshooting.

This article was published as a part of the Data Science Blogathon.

What is Explainable AI?

Explainable Artificial Intelligence (XAI) is AI designed to describe its purpose, rationale, and decision-making process in a way that the average human can understand. XAI is often discussed in the context of deep learning and plays an important role in the FAT ML framework, where fairness, accountability, and transparency matter in machine learning. XAI provides insight into how an AI program makes a certain decision, typically by revealing:

  • The strengths and weaknesses of the program.
  • The specific criteria the program uses to arrive at a decision.
  • Why the program makes a particular decision rather than the alternatives.
  • The appropriate level of confidence for various types of decisions.
  • The types of errors the program is prone to.

Understanding in Depth

First, we must understand why XAI is needed. AI algorithms often act as “black boxes” that provide an output without any way to understand their inner workings. The goal of XAI is to make the rationale behind an algorithm's output understandable to an ordinary person unfamiliar with the subject. Many AI systems rely on deep learning, in which algorithms learn to identify patterns from large volumes of training data. Deep learning is a neural network approach that loosely simulates how the human brain operates. As with human thought processes, determining exactly how a deep learning algorithm arrived at a prediction or decision can be difficult or impossible.

Decisions about employment and financial services, such as credit scores and loan approvals, are important and worth explaining. However, no one is likely to be physically harmed (at least not immediately) if one of those algorithms gives poor results. There are other settings where the consequences are far more serious.

Deep learning algorithms are increasingly used in healthcare, for example in cancer screening, where clinicians need to understand the basis for an algorithm's diagnosis. A false negative can mean that a patient does not receive life-saving treatment. A false positive, on the other hand, may result in a patient receiving expensive treatment that is not actually needed. This level of explanation is essential for radiologists and oncologists seeking to take full advantage of the growing benefits of AI.

How Does Explainable AI Work?

First, we define what interpretable AI is expected to deliver. The principles below describe the desired output of XAI, but they do not prescribe how to reach it. A useful way to divide XAI is into three categories, each answering a different set of questions:

  • Interpretable data: What data was used to train the model? Why was this data chosen? How was fairness assessed? Was any effort made to remove bias?
  • Interpretable predictions: What features of the model are activated or used to reach certain outputs?
  • Interpretable algorithms: What individual layers does the model consist of, and how do they lead to the output or prediction?

Interpretable data is the only category that is easy to achieve, at least in principle, for a neural network. Most researchers therefore put the greatest emphasis on achieving interpretable predictions and interpretable algorithms.
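As an illustration of interpretable predictions, permutation importance is one simple, model-agnostic way to see which input features a trained model relies on. The sketch below is only illustrative: it assumes a tabular dataset with a binary target column named "output" (as in the heart-disease data used later in this article) and uses scikit-learn, which is not prescribed by the categories above.

# Minimal sketch: which features does a trained model rely on?
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv('heart.csv')                    # assumed dataset with a binary "output" column
X, y = df.drop("output", axis=1), df["output"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score drops
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:10s} {score:.3f}")

Features whose shuffling hurts the score the most are the ones the model leans on hardest, which gives a human-readable answer to "what was used to reach this output?"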

There are two common approaches to interpretation today:

  • Proxy modeling: A different, simpler model, such as a decision tree, is used to approximate the actual model (see the sketch after this list). Because it is an approximation, its explanations may differ from what the actual model does.
  • Design for interpretation: The model is designed from the start to be easy to explain. This approach risks reducing the predictive power or overall accuracy of the model.
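Below is a minimal sketch of proxy modeling, under the same assumption of a heart-disease CSV with a binary "output" column: a shallow decision tree is trained to mimic the predictions of a more complex model, so the tree's rules serve as an approximate, human-readable explanation.

# Minimal proxy-modeling sketch: approximate a complex model with a small decision tree
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv('heart.csv')                    # assumed dataset with a binary "output" column
X, y = df.drop("output", axis=1), df["output"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose behaviour we want to explain
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The surrogate is fitted to the black box's predictions, not to the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Human-readable rules, plus how faithfully the tree mimics the black box
print(export_text(surrogate, feature_names=list(X.columns)))
print("Fidelity:", surrogate.score(X_test, black_box.predict(X_test)))

The fidelity score indicates how closely the surrogate tracks the black box; if fidelity is low, the tree's explanations should not be trusted.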

What are the Different Types of XAI?

We can classify XAI into three types:

  • Explainable AI: This is one of the most important types of XAI because it explains how the model was built, how it works, and which parts of the input drive its outputs, even when the model itself is complex.
  • Transparent AI: These models are simple and fast to implement; their algorithms consist of computations that ordinary humans could carry out themselves, giving a direct understanding of how the output is calculated. This transparency is important for building trust between people and the algorithms that help them get their tasks done. A minimal sketch of a transparent model follows this list.
  • Interactive AI: This type of XAI allows users to interact with the system and work together with it.
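As a minimal sketch of a transparent model, again assuming the hypothetical heart-disease data with a binary "output" column, a logistic regression exposes one weight per feature, so a person can read off how each input pushes the prediction up or down:

# Minimal transparent-model sketch: logistic regression with readable coefficients
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('heart.csv')                    # assumed dataset with a binary "output" column
X, y = df.drop("output", axis=1), df["output"]

# Standardize the features so the coefficients are directly comparable
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
coefs = model.named_steps["logisticregression"].coef_[0]

# A positive weight pushes the prediction towards output == 1
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:10s} {w:+.2f}")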

What are the Features of the XAI Interface?

XAI interfaces visualize the output for different data points to explain the relationships between specific features and model predictions. Users can observe the x and y values of individual data points and, through a colour code, see each point's contribution to the absolute error. This makes models easier to reason about for non-experts, who can see exactly how a given feature influences the prediction.
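Below is a minimal sketch of such a view, assuming the heart-disease data and its usual column names ("age" and "thalachh" are assumptions about the dataset) and a generic scikit-learn classifier: each point is one sample, plotted against two features and coloured by its absolute error, so regions where the model struggles stand out.

# Minimal sketch of an error-coloured scatter view over two features
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('heart.csv')                    # assumed dataset and column names
X, y = df.drop("output", axis=1), df["output"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Absolute error between the predicted probability and the true label
probs = model.predict_proba(X)[:, 1]
abs_error = np.abs(probs - y)

sc = plt.scatter(df["age"], df["thalachh"], c=abs_error, cmap="coolwarm")
plt.xlabel("age")
plt.ylabel("thalachh (maximum heart rate)")
plt.colorbar(sc, label="absolute error")
plt.title("Per-sample absolute error across two features")
plt.show()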

How does XAI Serve AI?

As artificial intelligence becomes more widely used in our daily lives, the ethics of artificial intelligence becomes an important concern. The increasing complexity of advanced AI models, and the difficulty of inspecting them, raises doubts about these models. Without understanding them, humans cannot decide whether these AI models are socially useful, trustworthy, safe, and fair. AI models therefore need to follow specific ethical guidelines. Gartner summarises the ethics of artificial intelligence in five main components:

  1. Clarity and transparency.
  2. Human-centred and socially beneficial.
  3. Fair.
  4. Safe and secure.
  5. Responsible.

One of XAI's primary goals is to help AI models satisfy these five components. Humans need a sufficiently deep understanding of an AI model to determine whether it follows them, and they cannot trust a model whose workings they do not know. By understanding how these models work, humans can decide whether AI models meet all five characteristics.

What are the Advantages?

XAI aims to explain how specific decisions or recommendations are made. In doing so, it helps humans understand why an AI behaves in certain ways and builds trust between humans and AI models. The important advantages of Explainable AI include the following:

  • Improved explanation and transparency: Companies can better understand the models they deploy and see why they behave in certain ways under certain conditions. Even for a black-box model, humans can use an interpretation interface to understand how the model reaches its conclusions.
  • Faster adoption: As companies come to understand AI models better, they can trust them with more important decisions.
  • Easier debugging: When the system behaves unexpectedly, XAI can be used to identify the problem and help developers fix it.
  • Auditability: Explanations enable audits for regulatory requirements.

There are significant commercial benefits to building interpretability into artificial intelligence systems. Beyond helping address pressures such as regulation and adopting good practices around accountability and ethics, there is much to be gained from getting ahead and investing in interpretability today.

Implementation

Import libraries:

Python Code:

# Libraries for preparing and exploring the data
import pandas as pd
import numpy as np
import plotly.figure_factory as ff
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
# Library for modelling
from sklearn.model_selection import train_test_split


# Load the heart-disease dataset
df = pd.read_csv('heart.csv')
print(df.head())
print(df.info())

Information about the data:

df.describe().style.background_gradient(cmap = 'copper')
# Count missing values per column
df.isna().sum()

# Distribution of the age column
fig = ff.create_distplot([df.age], ['age'], bin_size=5)
iplot(fig, filename='Basic Distplot')

# Also get the QQ-plot of the age column
fig = plt.figure()
res = stats.probplot(df['age'], plot=plt)
plt.show()
# Correlation heatmap of the numeric features
print('Heatmap')
plt.figure(figsize=(15, 10))
sns.heatmap(df.corr(), annot=True, cmap='coolwarm')
plt.show()

Using XAI:

# Install the xai library (xai.data ships as part of the same package)
!pip install xai
import sys, os
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.pipeline import make_pipeline

# Use below for charts in dark jupyter theme

THEME_DARK = False

if THEME_DARK:
    # This is used if Jupyter Theme dark is enabled. 
    # The theme chosen can be activated with jupyter theme as follows:
    # >>> jt -t oceans16 -T -nfs 115 -cellw 98% -N  -kl -ofs 11 -altmd
    font_size = '20.0'
    dark_theme_config = {
        "ytick.color" : "w",
        "xtick.color" : "w",
        "text.color": "white",
        'font.size': font_size,
        'axes.titlesize': font_size,
        'axes.labelsize': font_size, 
        'xtick.labelsize': font_size, 
        'ytick.labelsize': font_size, 
        'legend.fontsize': font_size, 
        'figure.titlesize': font_size,
        'figure.figsize': [20, 7],
        'figure.facecolor': "#384151",
        'legend.facecolor': "#384151",
        "axes.labelcolor" : "w",
        "axes.edgecolor" : "w"
    }
    plt.rcParams.update(dark_theme_config)

sys.path.append("..")

import xai
import xai.data

# Columns treated as categorical in the heart dataset
# (assumed, based on the usual Kaggle heart-disease column names)
categorical_cols = ["sex", "cp", "fbs", "restecg", "exng", "slp", "caa", "thall", "output"]

# Show class imbalance across age and the categorical columns
df_groups = xai.imbalance_plot(df, 'age', categorical_cols=categorical_cols)

# Normalize numeric columns and encode categorical ones
proc_df = xai.normalize_numeric(df)
proc_df = xai.convert_categories(proc_df)

x = proc_df.drop("output", axis=1)
y = proc_df["output"]

# Train/test split that keeps the groups balanced across 'age'
x_train, y_train, x_test, y_test, train_idx, test_idx = \
    xai.balanced_train_test_split(
            x, y, "age",
            min_per_group=1,
            max_per_group=1,
            categorical_cols=categorical_cols)

import sklearn
from sklearn.metrics import classification_report, mean_squared_error, roc_curve, auc

from keras.layers import Input, Dense, Flatten, \
    Concatenate, concatenate, Dropout, Lambda, Embedding
from keras.models import Model, Sequential

def build_model(X):
    input_els = []
    encoded_els = []
    dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
    for k, dtype in dtypes:
        # One input per column; int8 (categorical) columns go through an embedding
        input_els.append(Input(shape=(1,)))
        if dtype == "int8":
            e = Flatten()(Embedding(X[k].max() + 1, 1)(input_els[-1]))
        else:
            e = input_els[-1]
        encoded_els.append(e)
    encoded_els = concatenate(encoded_els)

    # Single hidden layer with dropout, sigmoid output for binary classification
    layer1 = Dropout(0.5)(Dense(100, activation="relu")(encoded_els))
    out = Dense(1, activation='sigmoid')(layer1)

    # Build and compile the model
    model = Model(inputs=input_els, outputs=[out])
    model.compile(optimizer="adam", loss='binary_crossentropy', metrics=['accuracy'])
    return model


def f_in(X, m=None):
    """Preprocess input so it can be provided to a function"""
    if m:
        return [X.iloc[:m,i] for i in range(X.shape[1])]
    else:
        return [X.iloc[:,i] for i in range(X.shape[1])]

def f_out(probs, threshold=0.5):
    """Convert probabilities into classes"""
    return list((probs >= threshold).astype(int).T[0])
model = build_model(x_train)

# Each column is fed to the model as a separate input via f_in
model.fit(f_in(x_train), y_train, epochs=1000, batch_size=512)
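Once training finishes, the held-out split can be scored with the metrics imported earlier. This is only a minimal evaluation sketch using scikit-learn and the f_in/f_out helpers defined above; the xai library also provides its own reporting plots, which are not shown here.

# Evaluate the trained model on the held-out split
probs = model.predict(f_in(x_test))          # predicted probabilities, shape (n, 1)
preds = f_out(probs)                         # thresholded class labels

print(classification_report(y_test, preds))

# ROC curve and the area under it
fpr, tpr, _ = roc_curve(y_test, probs.ravel())
print("AUC:", auc(fpr, tpr))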

Conclusion

In this article, we covered several things related to XAI, a tool that has recently attracted the interest of many researchers, data scientists, and analysts. We defined the technology, looked at why it emerged and the risks that motivate it, explained how it is used, and discussed its advantages and limitations. Finally, we applied the code: we loaded the heart-disease data, explored it with a few tools, and then implemented XAI as shown above.

I hope you enjoyed this article. The main takeaways are: first, the core concept behind the technology explained here; second, how the technology works, its different categories, and its features; and finally, the code through which it can be applied.

Frequently Asked Questions

Q1. What is an explainable AI example?

A. Explainable AI (XAI) can be seen in healthcare applications, like when AI models identify potential health conditions. By making the AI’s process clear, such as showing why a specific condition is flagged, XAI helps clinicians trust and verify diagnoses, making it valuable in sensitive, high-stakes fields.

Q2. Is ChatGPT an explainable AI?

A. ChatGPT is not fully explainable AI; it’s a language model focused on generating responses based on vast data without providing insight into how it arrived at specific answers. Explainable AI would require it to justify its responses, an area still under development for complex models like ChatGPT.

Q3. What is the difference between explainable AI and AI?

A. Explainable AI (XAI) adds a layer of transparency, allowing humans to understand and interpret an AI’s decision-making process. In contrast, traditional AI models may produce results without explaining their internal logic, making XAI crucial for applications requiring trust and accountability.

Q4. What is explainable AI in face detection?

A. Explainable AI in face detection aims to clarify why certain faces are detected or recognized by an algorithm. For instance, XAI could identify specific facial features the AI used to verify a person, providing transparency and addressing concerns about bias or accuracy in surveillance applications.

 The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Machine learning engineer, deep learner, and researcher in deep learning. I do scientific research and write scientific articles on machine learning in all its branches, such as NLP and computer vision.
