Preventing Eye Blindness by Predicting Stages of Diabetic Retinopathy

Yashaswi Kakumanu | Last Updated: 01 Aug, 2023

Introduction

Diabetic Retinopathy is an eye condition that damages the blood vessels in the retina. Left untreated, it leads to vision loss, so detecting the stage of Diabetic Retinopathy early is crucial for preventing blindness. This case study classifies the severity of diabetic retinopathy from retina images so that treatment can begin before vision is lost. The data was collected in rural areas by trained clinical experts using fundus cameras (cameras that photograph the rear of the eye), under a variety of imaging conditions. In 2019, Kaggle hosted the APTOS 2019 Blindness Detection competition to detect stages of diabetic retinopathy; our data comes from that competition. Early detection of diabetic retinopathy can fast-track treatment and significantly reduce the risk of vision loss.


Manual screening by trained clinical experts takes time and effort, especially in underdeveloped countries. Hence, the main aim of this case study is to use efficient technology to detect the severity of the condition and prevent blindness. We implement deep learning techniques to classify the severity of the condition effectively.

Learning Objectives

  • Understanding Diabetic Retinopathy: Learn about the eye condition and its impact on vision, emphasizing the importance of early detection.
  • Deep Learning Fundamentals: Explore the basics of deep learning and its relevance in diagnosing Diabetic Retinopathy.
  • Data Preprocessing and Augmentation: Understand how to effectively prepare and enhance the dataset for training deep learning models.
  • Model Selection and Evaluation: Learn to choose and assess the performance of deep learning models for severity classification.
  • Practical Deployment: Discover the deployment of the best model using Flask for real-world predictions.

This article was published as a part of the Data Science Blogathon.

Business Problem

Here, the severity of a person's condition is classified into one of five categories, i.e., a multi-class classification problem, since a person can be assigned only one severity level.

Business Constraints

Accuracy and interpretability are essential in the medical field, because a wrong prediction can lead to a case being ignored, which may cost a person their sight or even their life. We don't have any strict latency constraints, but we must be accurate about the results.

Data Set Description

The data set includes 3,662 labeled retina images of clinical patients, each of which trained clinician experts have categorized by severity of diabetic retinopathy as follows:

  • 0 — No Diabetic Retinopathy
  • 1 — Mild
  • 2 — Moderate
  • 3 — Severe
  • 4 — Proliferative Diabetic Retinopathy


Each image in our dataset is labeled with exactly one of these stages of diabetic retinopathy.

Performance Metric

We use the Quadratic Weighted Kappa and the confusion matrix as evaluation metrics for our multi-class classification.

Kappa measures the agreement (similarity) between the actual and predicted labels. A Kappa score of 1.0 indicates that the predictions and actual labels agree perfectly, while a score of -1 indicates complete disagreement. Our aim for this metric is a Kappa score above 0.6.

The Kappa metric plays a crucial role in medical diagnosis because it quantifies how well the two raters (predicted and actual labels) agree and, in the quadratic weighted variant, imposes a heavier penalty on larger disagreements. For example, in our case, if the predicted label is 0 (no retinopathy) but the actual label is 1 (mild), the patient's case would be ignored because no further diagnosis is recommended, which poses a serious problem.

As we know, the confusion matrix evaluates the performance of a classification model by comparing the actual target values with those predicted by the model. This gives us a holistic view of how well our classifier performs and what kinds of errors it makes.
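Both metrics are available in scikit-learn. Below is a minimal sketch, where y_true and y_pred stand for the actual and predicted stage labels (the sample values here are hypothetical):

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = np.array([0, 1, 2, 3, 4, 2, 1, 0])  # actual stages (hypothetical)
y_pred = np.array([0, 1, 1, 3, 4, 2, 0, 0])  # predicted stages (hypothetical)

# Quadratic weighting penalizes large disagreements more heavily:
# predicting 0 for a true 4 costs far more than predicting 3 for a true 4.
qwk = cohen_kappa_score(y_true, y_pred, weights='quadratic')
print('Quadratic Weighted Kappa:', qwk)

# Rows are actual stages, columns are predicted stages.
print(confusion_matrix(y_true, y_pred))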

Exploratory Data Analysis and Preprocessing

Let us check the distribution of our data by plotting the bar graph.

df['diagnosis'].value_counts().plot(kind='bar')

From the above plot, we can see that our data is clearly imbalanced. We need to balance it to prevent inaccurate results.

We can apply class weights so that the under-represented classes contribute proportionally more to the loss, giving each class a uniform influence during training (see the sketch below).
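A minimal sketch of computing such weights with scikit-learn, assuming df['diagnosis'] holds the stage labels; the resulting dictionary can be passed to Keras through the class_weight argument of model.fit:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = df['diagnosis'].values
# 'balanced' weights each class inversely proportional to its frequency
weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(labels), y=labels)
class_weights = dict(enumerate(weights))
print(class_weights)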

Our training dataset contains only 3,662 images, so the dataset provided by Kaggle is very small, and a model trained on it is likely to overfit. Preprocessing therefore plays a crucial role in improving performance, and data augmentation helps enlarge the effective dataset. Before augmentation, we need to check the image conditions, as the data has been collected from various sources: some images are very dark, some have extra black background, and they come in various sizes. Hence, we apply smoothing techniques to bring the images to uniform quality, cropping extra black backgrounds, resizing the images to a common size, etc.

[Figure: raw dataset images of varying sizes and croppings. Source: Author]

From the above, we can observe that our dataset contains images of different sizes, with horizontal cropping, vertical cropping, and extra black regions.

We apply the smoothing techniques below to bring all the images to uniform quality.

Crop Function

→ Crop function: removes the extra dark regions around the image.

import cv2
import numpy as np

def crop(img, tol=7):
  '''
  Remove the dark parts around the image.
  tol is the tolerance: pixels darker than tol are treated as background.
  '''
  if img.ndim == 2:
    # grayscale image: keep only rows/columns that contain bright pixels
    mask = img > tol
    return img[np.ix_(mask.any(1), mask.any(0))]
  elif img.ndim == 3:
    # color image: build the mask from a grayscale copy
    grayimg = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    mask = grayimg > tol
    shap = img[:, :, 0][np.ix_(mask.any(1), mask.any(0))].shape[0]
    if shap == 0:
      # the image is so dark that everything would be cropped out
      return img
    else:
      # crop each channel, then stack them back into a color image
      img0 = img[:, :, 0][np.ix_(mask.any(1), mask.any(0))]
      img1 = img[:, :, 1][np.ix_(mask.any(1), mask.any(0))]
      img2 = img[:, :, 2][np.ix_(mask.any(1), mask.any(0))]
      img = np.stack([img0, img1, img2], axis=-1)
    return img

The figure below shows that after applying the function, the dark parts around each image are cropped away.

[Figure: images after applying the crop function. Source: Author]

→ Circle crop function: crops the image circularly around its center.

def circlecrop(img):
  '''
  Crop the image circularly around the image centre.
  '''
  h, w, d = img.shape
  x = int(w / 2)         # centre x
  y = int(h / 2)         # centre y
  r = np.amin((x, y))    # radius: distance to the nearest border
  # draw a filled white circle on a black mask, then keep only
  # the pixels inside the circle
  circle_img = np.zeros((h, w), np.uint8)
  cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
  img = cv2.bitwise_and(img, img, mask=circle_img)
  return img

The figure below is obtained after applying the circular crop function.

[Figure: images after applying the circular crop. Source: Author]

Ben Graham’s Function

→ Ben Graham's function: improves the lighting condition of the image.

def ben(img, sigmaX=10):
  '''
  Ben Graham's method to improve the lighting condition:
  subtract a Gaussian-blurred copy of the image and re-centre around 128.
  '''
  return cv2.addWeighted(img, 4, cv2.GaussianBlur(img, (0, 0), sigmaX), -4, 128)

The figure below shows that applying Ben Graham's function improves the lighting condition of the image.

[Figure: images after applying Ben Graham's method. Source: Author]

Let’s check the preprocessed retina images after applying the above functions.

[Figure: preprocessed retina images. Source: Author]
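Chaining the three functions gives the full preprocessing pipeline. A minimal sketch, where 'raw.png' is a hypothetical input path and 'prep' is the folder of preprocessed images used by the generators below:

img = cv2.imread('raw.png')                       # hypothetical input image
img = crop(img)                                   # remove dark borders
img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)
img = circlecrop(img)                             # keep the circular retina region
img = ben(img)                                    # improve the lighting
cv2.imwrite('prep/raw.png', img)                  # assumes the 'prep' folder exists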

As our dataset is very small, the model may overfit. To overcome this, we increase the training data using augmentation techniques like horizontal flipping, vertical flipping, rotation, zooming, and brightness adjustment.

from keras_preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True,
                             rotation_range=360,
                             brightness_range=[0.5, 1],
                             zoom_range=0.2, rescale=1./255.,
                             validation_split=0.25)
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = datagen.flow_from_dataframe(
    dataframe=df,
    directory="prep",
    x_col="add",
    y_col="diagnosis",
    subset="training",
    batch_size=12,
    seed=42,
    shuffle=True,
    class_mode="categorical",
    target_size=(256, 256))

As we used a validation split of 0.25, we get 2,747 training and 915 validation images.
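For completeness, the matching validation generator can be drawn from the same split. A minimal sketch, assuming the same dataframe columns as above:

valid_generator = datagen.flow_from_dataframe(
    dataframe=df,
    directory="prep",
    x_col="add",
    y_col="diagnosis",
    subset="validation",   # the remaining 25% of the split
    batch_size=12,
    seed=42,
    shuffle=False,         # keep a fixed order for evaluation
    class_mode="categorical",
    target_size=(256, 256))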

Note that the generator does not store extra copies of each image; instead, it applies random combinations of the five transformations (horizontal flip, vertical flip, rotation, brightness, zoom) on the fly, so the model sees a differently augmented variant of each image every epoch.

Deep Learning Model

First, we construct a baseline model with a simple CNN architecture.

from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                                     Flatten, BatchNormalization, Dense)
from tensorflow.keras.models import Model

inp = Input(shape=(256, 256, 3))
x = Conv2D(32, (3, 3), activation='relu')(inp)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.5)(x)
x = Flatten()(x)
x = BatchNormalization()(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)

This baseline model gives a Kappa score of 0.554, which is not acceptable for predicting the stage of the condition.
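The compile-and-train step is not shown in the original snippets. A minimal sketch of how the model could be trained with the generators and class weights from earlier (the optimizer and epoch count are assumptions, not the exact settings used):

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_generator,
          validation_data=valid_generator,
          epochs=30,                    # assumed; tune as needed
          class_weight=class_weights)   # from the earlier sketch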

We turn to transfer learning, using pretrained models to achieve a higher Kappa score.

VGG-16

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import GlobalAveragePooling2D

model_vgg16 = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
x = GlobalAveragePooling2D()(model_vgg16.layers[-1].output)
x = Dropout(0.5)(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=model_vgg16.input, outputs=out)

→Train Cohen Kappa score: 0.913

→Train Accuracy score: 0.817

From the above model, we get a kappa score of 0.913

DENSENET

from tensorflow.keras.applications import DenseNet121

modeldense = DenseNet121(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
x = GlobalAveragePooling2D()(modeldense.layers[-1].output)
x = Dropout(0.5)(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=modeldense.input, outputs=out)

→Train Cohen Kappa score: 0.933

→Train Accuracy score: 0.884

From the above model, we get a kappa score of 0.933

RESNET

from tensorflow.keras.applications import ResNet152

modelres152 = ResNet152(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
x = GlobalAveragePooling2D()(modelres152.layers[-1].output)
x = Dropout(0.5)(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=modelres152.input, outputs=out)

→Train Cohen Kappa score: 0.910

→Train Accuracy score: 0.844

From the above model, we get a kappa score of 0.91

EFFICIENTNET

After experimenting with several EfficientNet variants (EfficientNetB0, B3, B4, and B7), we obtained the best results with EfficientNetB7.

from tensorflow.keras.applications import EfficientNetB7

modeleffB7 = EfficientNetB7(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
x = GlobalAveragePooling2D()(modeleffB7.layers[-1].output)
x = Dropout(0.5)(x)
x = Flatten()(x)  # no-op after global pooling, kept from the original
out = Dense(5, activation='softmax')(x)
model = Model(inputs=modeleffB7.input, outputs=out)

→Train Cohen Kappa score: 0.877

→Train Accuracy score: 0.838

From the above model, we get a kappa score of 0.877

XCEPTION

from tensorflow.keras.applications import Xception

modelxcep = Xception(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
x = GlobalAveragePooling2D()(modelxcep.layers[-1].output)
x = Dropout(0.5)(x)
x = Flatten()(x)  # no-op after global pooling, kept from the original
out = Dense(5, activation='softmax')(x)
model = Model(inputs=modelxcep.input, outputs=out)

→Train Cohen Kappa score: 0.925

→Train Accuracy score: 0.854

From the above model, we get a kappa score of 0.925

Model            Train Kappa   Train Accuracy
Baseline CNN     0.554         n/a
VGG-16           0.913         0.817
DenseNet-121     0.933         0.884
ResNet-152       0.910         0.844
EfficientNetB7   0.877         0.838
Xception         0.925         0.854

From the above comparison, we observe that the DenseNet model obtains the best Kappa score. So we choose the DenseNet model as our best and head for stage prediction.

Prediction Using Our Best Model

Predicting the stages using our best model:

X = '/content/00a8624548a9.png'
img = cv2.imread(X)
# apply the same preprocessing pipeline used for training
img = crop(img)
img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)
img = circlecrop(img)
img = ben(img)
img = np.reshape(img, [1, 256, 256, 3])
# rescale (and randomly augment) the image exactly as during training
cd = ImageDataGenerator(horizontal_flip=True, vertical_flip=True,
                        rotation_range=360, brightness_range=[0.5, 1],
                        zoom_range=0.2, rescale=1./255)
cg = cd.flow(img, batch_size=1)
tp = model.predict(cg)
op = np.argmax(tp)   # index of the most probable class
if op == 0:
  matter = "Stage 0 - No Diabetic Retinopathy"
elif op == 1:
  matter = "Stage 1 - Mild"
elif op == 2:
  matter = "Stage 2 - Moderate"
elif op == 3:
  matter = "Stage 3 - Severe"
elif op == 4:
  matter = "Stage 4 - Proliferative Diabetic Retinopathy"
print(matter)

From the above output, we can see that our best model predicts the stage of diabetic retinopathy for the given image.

Deployment of Our Model Using Flask

I used Flask to deploy the model so that we can predict the stage of diabetic retinopathy for an uploaded retina image. A minimal sketch of such an app is shown below.
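This sketch assumes the best model was saved as best_model.h5 (a hypothetical filename) and that the crop, circlecrop, and ben functions defined earlier are importable:

import cv2
import numpy as np
from flask import Flask, request
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model('best_model.h5')   # hypothetical saved-model path
STAGES = ["Stage 0 - No Diabetic Retinopathy", "Stage 1 - Mild",
          "Stage 2 - Moderate", "Stage 3 - Severe",
          "Stage 4 - Proliferative Diabetic Retinopathy"]

@app.route('/predict', methods=['POST'])
def predict():
    # decode the uploaded file into an OpenCV image
    data = np.frombuffer(request.files['image'].read(), np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    # same preprocessing pipeline as training
    img = crop(img)
    img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)
    img = circlecrop(img)
    img = ben(img)
    img = img.reshape(1, 256, 256, 3) / 255.
    return STAGES[int(np.argmax(model.predict(img)))]

if __name__ == '__main__':
    app.run()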

Conclusion

In conclusion, this blog has showcased the transformative power of deep learning in detecting Diabetic Retinopathy and preventing vision loss. With early detection and accurate severity classification, AI can significantly improve patient outcomes. The real-world applicability of these techniques through model deployment using Flask highlights their practicality in healthcare settings.

Continued research in augmentation techniques and model refinement will further enhance diagnostic capabilities. By harnessing the potential of AI, we can revolutionize medical diagnosis and pave the way for a healthier future.

  • Deep learning models, such as VGG-16, DENSENET, RESNET, EFFICIENTNET, and XCEPTION, have effectively classified the severity of Diabetic Retinopathy.
  • The best-performing model, DENSENET, achieved high Kappa scores, demonstrating its capability for accurate predictions.
  • Data preprocessing and augmentation are vital in enhancing the model’s performance and generalizability.
  • Flask deployment showcases the practical applicability of deep learning in real-world scenarios, facilitating efficient diagnosis and treatment.
  • Continued research in augmentation techniques and model refinement holds the potential for further improving diagnostic accuracy and advancing medical diagnosis using AI.

Future Work

  • We can implement more augmentation techniques.
  • We can try out various convolutional layers on our models.
  • We can collect more retina images for training.

Frequently Asked Questions

Q1. What is Diabetic Retinopathy, and why is it essential to detect it early?

A1. Diabetic Retinopathy is an eye condition caused by changes in the retina's blood vessels. Early detection is vital, as it allows timely intervention, preventing vision loss and blindness.

Q2. How does deep learning help in diagnosing Diabetic Retinopathy?

A2. Deep learning techniques, such as CNNs, analyze retina images to classify the severity of Diabetic Retinopathy accurately. They learn patterns and features from data, aiding in efficient diagnosis.

Q3. What key performance metrics are used to evaluate the deep learning models?

A3. The blog used Quadratic Weighted Kappa and Confusion Matrix as evaluation metrics. Quadratic Weighted Kappa measures agreement between predicted and actual labels, while the Confusion Matrix provides a holistic view of model performance.

Q4. What are the advantages of using deep learning models for Diabetic Retinopathy detection compared to traditional machine learning approaches?

A4. Deep learning models can automatically learn complex patterns and features from data, making them more effective in capturing intricate details in retina images. This enables deep learning models to outperform traditional machine learning approaches in accuracy and diagnostic capabilities for Diabetic Retinopathy detection.

Q5. What are some potential challenges or limitations when applying deep learning models to detect Diabetic Retinopathy?

A5. Some challenges and limitations may include the need for extensive and diverse datasets, potential overfitting with small datasets, interpretability of deep learning models, and the requirement for significant computational resources for training and inference. Addressing these challenges is essential for ensuring reliable and practical deep-learning applications in Diabetic Retinopathy diagnosis.

References

  • https://github.com/btgraham/SparseConvNet/blob/kaggle_Diabetic_Retinopathy_competition/competitionreport.pdf
  • https://arxiv.org/abs/1905.11946
  • https://arxiv.org/abs/0704.1028
  • https://www.kaggle.com/xhlulu/aptos-2019-densenet-keras-starter
  • https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/108065

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Passionate Machine Learning Engineer with expertise in agile methodology and cloud environments. Proficient in designing,
developing, testing, and deploying applications utilizing cloud technologies. Actively contributes to open-source projects,
demonstrating a commitment to advancing machine learning through continuous learning and improvement.
