In artificial intelligence, a growing area of recent work is explainable artificial intelligence, a set of techniques aimed at communicating how a model reaches its results in terms an ordinary person can understand. The idea is to present model outputs in a clearer, more accessible form, so that a non-expert can interpret the results accurately. In this article we will look at Explainable AI, or XAI.
Overview:
This article was published as a part of the Data Science Blogathon.
Explainable Artificial Intelligence (XAI) refers to AI that is designed to describe its purpose, rationale, and decision-making process in a way that the average person can understand. XAI is often discussed in the context of deep learning and plays an important role in the FAT ML framework, where fairness, accountability, and transparency are important in machine learning. XAI provides insight into how an AI program arrives at a particular decision, and the following sections describe how that insight is obtained.
First, we must understand why XAI is needed. AI algorithms often act as “black boxes” that provide an output without any way to understand their inner workings. The goal of XAI is to make the rationale behind an algorithm’s output understandable to an ordinary person unfamiliar with the subject. Many of these AI systems use deep learning, where algorithms learn to identify patterns from large amounts of training data. Deep learning is a neural network approach that loosely simulates how the human brain operates. As with human thought processes, determining how a deep learning algorithm reached a prediction or decision can be difficult or impossible.
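To make this concrete, here is a minimal, self-contained sketch (not from the original article) of one common post-hoc explanation technique: permutation importance applied to a black-box classifier. The dataset and model below are synthetic stand-ins chosen only for illustration.
# A minimal sketch (illustrative only): explaining a "black box" model
# with permutation importance from scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
# Synthetic data standing in for a real dataset
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
# An opaque ("black box") model
black_box = RandomForestClassifier(random_state=0).fit(X, y)
# Post-hoc explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")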
Decisions in areas such as employment and financial services, for example credit scoring and loan approvals, are important and worth explaining. In those cases, no one is likely to be physically harmed (at least immediately) if an algorithm gives poor results, but there are many examples where the consequences are far more serious.
Deep learning algorithms are increasingly important in healthcare use cases such as cancer screening, where clinicians need to understand the basis for an algorithm’s diagnosis. A false negative can mean that a patient does not receive life-saving treatment. A false positive, on the other hand, may result in a patient receiving expensive treatment that is not needed. This level of explanation is essential for radiologists and oncologists seeking to take full advantage of the growing benefits of AI.
First, we define what interpretable AI is. The principles below help determine the expected output from XAI, but they do not provide guidance on how to reach that outcome. XAI can usefully be divided into three categories: interpretable data, interpretable predictions, and interpretable algorithms.
Interpretable data is the only category that is easy to achieve, at least in principle, in a neural network. Most researchers therefore put the greatest emphasis on achieving interpretable predictions and interpretable algorithms.
There are two current approaches to interpretation, and XAI models can accordingly be classified into two types:
These models are simple and fast to implement. Their algorithms consist of computations simple enough that ordinary humans could carry them out themselves. As a result, the models are self-explanatory, and humans can easily understand how they arrive at a particular decision.
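As an illustration (not taken from the article), here is a minimal sketch of such a transparent model: a shallow decision tree whose learned rules can be printed and read directly. The built-in iris dataset is used purely as a stand-in.
# A minimal sketch (illustrative only): a transparent, self-explanatory model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
iris = load_iris()
# A shallow tree keeps the rules short enough for a person to follow
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
# Print the learned decision rules in plain text
print(export_text(tree, feature_names=list(iris.feature_names)))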
Interpretable AI provides a direct understanding of how an AI system calculates its output. We will mention several models of this kind, which are types of transparent AI. This transparency is important for building trust between people and algorithms because it helps users understand how the AI works. Interactive AI goes further, allowing users to interact with the model and work together with it.
XAI interfaces visualize the output for different data points to explain the relationships between specific features and model predictions. Users can observe the x and y values of individual data points and, through a colour code, see how each point contributes to the absolute error. This makes the ideas a model presents easier and more precise for ordinary people, so they can understand exactly how each feature influences the result.
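As a rough sketch of that kind of interface (not from the original article), the snippet below fits a simple regression model and plots each data point coloured by its absolute error, so the relationship between a feature, the prediction, and the error is visible at a glance.
# A minimal sketch (illustrative only): colour data points by absolute error.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)            # a single feature
y = 3 * x + rng.normal(0, 2, 200)      # noisy target
model = LinearRegression().fit(x.reshape(-1, 1), y)
pred = model.predict(x.reshape(-1, 1))
abs_err = np.abs(y - pred)
# Each point is a sample; the colour encodes how large its error is
plt.scatter(x, pred, c=abs_err, cmap='coolwarm')
plt.colorbar(label='absolute error')
plt.xlabel('feature value')
plt.ylabel('model prediction')
plt.show()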
As artificial intelligence becomes more widely used in our daily lives, the ethics of artificial intelligence becomes an important concern. The increasing complexity of advanced AI models, and the difficulty of understanding them, raise doubts about these models. Without understanding them, humans cannot decide whether these AI models are socially beneficial, trustworthy, safe, and fair. AI models therefore need to follow specific ethical guidelines. Gartner groups the ethics of artificial intelligence into five main components:
One of XAI’s primary goals is to help AI models satisfy these five components. Humans need a deep understanding of AI models to determine whether they follow them; they cannot trust an AI model if they do not know how it works. By understanding how these models work, humans can decide whether an AI model exhibits all five characteristics.
XAI aims to explain how specific decisions or recommendations are made. In doing so, it helps humans understand why an AI system behaves in certain ways and builds trust between humans and AI models. The important advantages of Explainable AI include the following:
There are also significant commercial benefits to building interpretability into artificial intelligence systems. Beyond helping address pressures such as regulation and adopting good practices around accountability and ethics, there are real gains to be had from getting ahead and investing in interpretability today.
Import libraries:
Python Code:
# libraries for preparing the data
import pandas as pd
import numpy as np
import plotly.figure_factory as ff
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
## library for modelling
from sklearn.model_selection import train_test_split
df=pd.read_csv('heart.csv')
#print(df.head())
#print(df.info())
Information about the data:
df.describe().style.background_gradient(cmap = 'copper')
# count missing values in each column
df.isna().sum()
fig = ff.create_distplot([df.age],['age'],bin_size=5)
iplot(fig, filename='Basic Distplot')
#Get also the QQ-plot
fig = plt.figure()
res = stats.probplot(df['age'], plot=plt)
plt.show()
print('Heatmap')
plt.figure(figsize=(15,10))
sns.heatmap(df.corr(),annot=True,cmap='coolwarm')
!pip install xai  # the xai package provides the xai.data submodule used below
import sys, os
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
# Use below for charts in dark jupyter theme
THEME_DARK = False
if THEME_DARK:
    # This is used if the dark Jupyter theme is enabled.
    # The theme can be activated with jupyterthemes as follows:
    # >>> jt -t oceans16 -T -nfs 115 -cellw 98% -N -kl -ofs 11 -altmd
    font_size = '20.0'
    dark_theme_config = {
        "ytick.color": "w",
        "xtick.color": "w",
        "text.color": "white",
        'font.size': font_size,
        'axes.titlesize': font_size,
        'axes.labelsize': font_size,
        'xtick.labelsize': font_size,
        'ytick.labelsize': font_size,
        'legend.fontsize': font_size,
        'figure.titlesize': font_size,
        'figure.figsize': [20, 7],
        'figure.facecolor': "#384151",
        'legend.facecolor': "#384151",
        "axes.labelcolor": "w",
        "axes.edgecolor": "w"
    }
    plt.rcParams.update(dark_theme_config)
sys.path.append("..")
import xai
import xai.data
# categorical columns of the heart dataset (an assumed list; adjust to your CSV's columns)
categorical_cols = ['sex', 'cp', 'fbs', 'restecg', 'exng', 'slp', 'caa', 'thall']
df_groups = xai.imbalance_plot(df, 'age', categorical_cols=categorical_cols)
# normalize numeric columns and encode categories (applied to df, since no balanced frame was created above)
proc_df = xai.normalize_numeric(df)
proc_df = xai.convert_categories(proc_df)
x = df.drop("output", axis=1)
y = df["output"]
x_train, y_train, x_test, y_test, train_idx, test_idx = \
    xai.balanced_train_test_split(
        x, y, "age",
        min_per_group=1,
        max_per_group=1,
        categorical_cols=categorical_cols)
import sklearn
from sklearn.metrics import classification_report, mean_squared_error, roc_curve, auc
from keras.layers import Input, Dense, Flatten, Concatenate, concatenate, Dropout, Lambda
from keras.models import Model, Sequential
from keras.layers import Embedding  # keras.layers.embeddings was removed in newer Keras versions
def build_model(X):
    input_els = []
    encoded_els = []
    dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
    for k, dtype in dtypes:
        # one scalar input per column
        input_els.append(Input(shape=(1,)))
        if dtype == "int8":
            # categorical column: learn a 1-dimensional embedding
            e = Flatten()(Embedding(X[k].max() + 1, 1)(input_els[-1]))
        else:
            e = input_els[-1]
        encoded_els.append(e)
    encoded_els = concatenate(encoded_els)
    # one hidden layer with dropout, sigmoid output for binary classification
    layer1 = Dropout(0.5)(Dense(100, activation="relu")(encoded_els))
    out = Dense(1, activation='sigmoid')(layer1)
    # build and compile the model
    model = Model(inputs=input_els, outputs=[out])
    model.compile(optimizer="adam", loss='binary_crossentropy', metrics=['accuracy'])
    return model
def f_in(X, m=None):
    """Preprocess input so it can be provided to a function"""
    if m:
        return [X.iloc[:m, i] for i in range(X.shape[1])]
    else:
        return [X.iloc[:, i] for i in range(X.shape[1])]
def f_out(probs, threshold=0.5):
    """Convert probabilities into classes"""
    return list((probs >= threshold).astype(int).T[0])
model = build_model(x_train)
model.fit(f_in(x_train), y_train, epochs=1000, batch_size=512)
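To close the loop, here is a small sketch (not in the original article) of how the fitted model could be evaluated on the held-out split using the metrics imported earlier; it assumes x_test and y_test were produced by the balanced split above.
# A minimal evaluation sketch (illustrative only), reusing f_in/f_out from above
probs = model.predict(f_in(x_test))             # predicted probabilities, shape (n, 1)
preds = f_out(probs)                            # convert to 0/1 class labels
print(classification_report(y_test, preds))     # precision/recall/F1 per class
fpr, tpr, _ = roc_curve(y_test, probs.ravel())  # ROC curve from the raw probabilities
print("AUC:", auc(fpr, tpr))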
In this article, we covered several things related to XAI, a tool that has recently attracted the interest of many researchers, data scientists, and analysts. The field is still in its early stages, which is where much of the value of this technology lies. We defined the technology, looked at how it emerged and the problems that motivated its development, discussed how to use it, and reviewed its advantages and disadvantages. Finally, we applied the code above, loading the dataset, exploring it with a few tools, and then implementing XAI.
I hope you enjoyed this article. The main points were: first, the overall concept behind the technology explained here; second, how the technology works, its different components and features, and how they serve it; and finally, the code that puts it into practice.
A. Explainable AI (XAI) can be seen in healthcare applications, like when AI models identify potential health conditions. By making the AI’s process clear, such as showing why a specific condition is flagged, XAI helps clinicians trust and verify diagnoses, making it valuable in sensitive, high-stakes fields.
A. ChatGPT is not fully explainable AI; it’s a language model focused on generating responses based on vast data without providing insight into how it arrived at specific answers. Explainable AI would require it to justify its responses, an area still under development for complex models like ChatGPT.
A. Explainable AI (XAI) adds a layer of transparency, allowing humans to understand and interpret an AI’s decision-making process. In contrast, traditional AI models may produce results without explaining their internal logic, making XAI crucial for applications requiring trust and accountability.
A. Explainable AI in face detection aims to clarify why certain faces are detected or recognized by an algorithm. For instance, XAI could identify specific facial features the AI used to verify a person, providing transparency and addressing concerns about bias or accuracy in surveillance applications.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.