Assorting and Locating Varied Forms of Sexual Harassment

yashaswi kakumanu Last Updated : 17 Aug, 2023

Introduction

Did you know that one reason sexual harassment remains so prevalent is its low reporting rate? If victims do not report the harassment they experience, how can authorities protect others from being harassed, and how can offenders' behavior ever change? The Assorting and Locating Varied Forms of Sexual Harassment case study lets victims share their experiences anonymously and categorizes the types of sexual harassment they have faced. This speeds up the assignment of a category when testimonials are filed, and it also supports safety precautions informed by analysis of previously filed reports.

These safety precautions give individuals a heads-up by highlighting the locations where the most harassment reports are filed in a region and describing typical offender behavior there. Going forward, such predictions can benefit individuals by providing insight and raising awareness about the circumstances in which these incidents occur.


Learning Objectives 

  • Predicting multi-label classification of various forms of harassment in society
  • Utilizing natural language processing techniques on the dataset
  • Iterating over traditional machine learning algorithms
  • Implementing convolutional neural networks
  • The blog discusses the application of these methods to address harassment-related issues

This article was published as a part of the Data Science Blogathon.

Business Problem

Victims' stories are categorized into three types of sexual harassment. Because a victim can face one or more types at the same time, we frame this as a multi-label classification problem.

Business Constraints

Because this case study is a multi-label classification problem, a misclassification is no longer a hard right or wrong: a prediction containing a subset of the actual classes should be considered better than a prediction that contains none of them, i.e. predicting two of the three labels correctly is better than predicting no labels at all. There are no strict latency constraints. Interpretability is very important because it helps explain why a story is classified as a particular type of harassment.

Dataset Description

The data has been collected from the Safecity online forum and the WIN World Survey (WWS), a market research and polling survey covering countries where sexual harassment is predominant. The dataset contains two features: Feature 1 is the victim's story (Description), and Feature 2 is the geolocation of the event (Location).

The target is a multi-label vector indicating which of the three types of sexual harassment (Commenting, Ogling, and Groping) the victim has experienced.


Performance Metric

In multi-label classification, the prediction for an instance is a set of labels, so a prediction can be fully correct, partially correct, or fully incorrect. This makes evaluating a multi-label classifier more challenging than evaluating a single-label classifier. To account for partial correctness, we can use the metrics below.

Accuracy — For one instance, accuracy is calculated as the proportion of correctly predicted labels to the total number of labels (the union of predicted and actual labels). Overall accuracy is obtained by averaging across all instances.

These metrics can be computed on individual class labels and then averaged over all classes; this is termed Macro averaging. Alternatively, we can compute them globally over all instances and all class labels; this is termed Micro averaging.

We use the Macro F1-score and Micro F1-score as metrics for multi-label classification.

Hamming Loss is also used as a metric for multi-label classification; it computes the proportion of incorrectly predicted labels to the total number of labels.
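
For reference, the sketch below shows how these metrics can be computed with scikit-learn, assuming y_true and y_pred are binary label-indicator arrays for the three harassment types (the values here are purely illustrative):

import numpy as np
from sklearn.metrics import f1_score, hamming_loss

# Illustrative binary indicator arrays with columns (commenting, ogling, groping)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 1]])

print("Macro F1    :", f1_score(y_true, y_pred, average="macro"))
print("Micro F1    :", f1_score(y_true, y_pred, average="micro"))
print("Hamming loss:", hamming_loss(y_true, y_pred))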

Preprocessing

To obtain better insights, we first clean the data (removing symbols, punctuation, special characters, etc.). For text data, cleaning or preprocessing is as important as model building.

Below are the preprocessing steps we need to perform (a minimal cleaning sketch follows the list):

  1. Lower casing
  2. Removal of digits
  3. Removal of punctuation
  4. Removal of special characters
  5. Removal of HTML tags
  6. Removal of stop words
  7. Expanding contractions
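
The exact preprocessing code is not reproduced here; the following is a minimal sketch of these steps, assuming a pandas DataFrame df with a Description column, the NLTK stopwords corpus, and the third-party contractions package (the linked StackOverflow answer offers a dictionary-based alternative):

import re
import contractions                      # third-party package for expanding contractions
from nltk.corpus import stopwords        # requires nltk.download("stopwords")

STOP_WORDS = set(stopwords.words("english"))

def clean_text(text):
    text = text.lower()                                        # 1. lower casing
    text = contractions.fix(text)                              # 7. expand contractions ("didn't" -> "did not")
    text = re.sub(r"<.*?>", " ", text)                         # 5. remove HTML tags
    text = re.sub(r"\d+", " ", text)                           # 2. remove digits
    text = re.sub(r"[^a-z\s]", " ", text)                      # 3 & 4. remove punctuation and special characters
    words = [w for w in text.split() if w not in STOP_WORDS]   # 6. remove stop words
    return " ".join(words)

df["Description"] = df["Description"].apply(clean_text)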

Exploratory Data Analysis

It is important to make sure the data is ready for modelling. Exploratory Data Analysis (EDA) checks that readiness and makes the data more usable; without proper EDA, machine learning work can suffer from accuracy issues and, in many cases, the algorithms simply will not work. EDA also helps us understand the data and extract better insights, so that is where we start.

Checking Null Values in the Dataset

 df.isnull().sum()


We add an extra feature to the data frame that counts the number of words in each victim story, and then plot the distribution of this word-count column.
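
A minimal sketch of this step, assuming the new column is called word_count and seaborn is used for the distribution plot:

import seaborn as sns
import matplotlib.pyplot as plt

# number of words in each victim story
df["word_count"] = df["Description"].str.split().str.len()

sns.displot(df["word_count"])
plt.xlabel("Words per story")
plt.show()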

[Figure: distribution of word counts in victim stories]
  • From the above plot, we can deduce that most victims share their experiences in fewer than 100 words.

Geographical Plot

From the data frame we take the Location column and count how many times victims have reported harassment in each region. To draw the geographical plot, we construct a data frame of countries (where the harassment was experienced) and the count of victims who reported from each one.
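
One way to draw such a choropleth with Plotly Express, assuming the Location column holds country names:

import plotly.express as px

# number of reported incidents per country
country_counts = (df.groupby("Location").size()
                    .reset_index(name="count"))

fig = px.choropleth(country_counts,
                    locations="Location",
                    locationmode="country names",
                    color="count",
                    title="Reported incidents by country")
fig.show()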

[Figure: geographical plot of reported harassment counts by country]
  • From the above map, we can deduce that the highest number of victims reported from the Mexico region (the brighter yellow area).

Bar Plot

We use bar plots to check the number of victim stories in each category.

[Figure: bar plot of victim stories per category]

We are creating a column ‘label’ as follows (a sketch of the encoding follows the list):

  • We label 1 when the person experiences only commenting harassment
  • We label 2 when the person experiences only ogling harassment
  • We label 3 when the person experiences only groping harassment
  • We label 4 when the person experiences only commenting and ogling harassment
  • We label 5 when the person experiences only ogling and groping harassment
  • We label 6 when the person experiences only commenting and groping harassment
  • We label 7 when the person doesn’t experience any harassment
  • We label 8 when the person experiences all three types of harassment at the same time
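
A sketch of this labelling scheme, assuming binary indicator columns named commenting, ogling, and groping:

def encode_label(row):
    c, o, g = row["commenting"], row["ogling"], row["groping"]
    if c and not o and not g:
        return 1            # commenting only
    if o and not c and not g:
        return 2            # ogling only
    if g and not c and not o:
        return 3            # groping only
    if c and o and not g:
        return 4            # commenting and ogling
    if o and g and not c:
        return 5            # ogling and groping
    if c and g and not o:
        return 6            # commenting and groping
    if not c and not o and not g:
        return 7            # no harassment reported
    return 8                # all three types at the same time

df["label"] = df.apply(encode_label, axis=1)
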
[Figure: bar plot of victim stories per label]

From the above bar plot, we can observe that women in Mexico have experienced the highest number of sexual harassments. We also need a clear intuition of the words that occur most frequently in each category. Below are the bar plots of the most common unigrams, bigrams, and trigrams for each category.
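
The n-gram counts behind these plots can be computed as sketched below with scikit-learn's CountVectorizer (the indicator column names are assumptions):

from sklearn.feature_extraction.text import CountVectorizer

def top_ngrams(texts, ngram_range=(1, 1), top_n=20):
    """Return the top_n most frequent n-grams in a collection of texts."""
    vec = CountVectorizer(ngram_range=ngram_range)
    counts = vec.fit_transform(texts)
    freqs = counts.sum(axis=0).A1                   # total count of each n-gram
    ranked = sorted(zip(vec.get_feature_names_out(), freqs),
                    key=lambda x: x[1], reverse=True)
    return ranked[:top_n]

# e.g. most common bigrams in stories labelled as Commenting
print(top_ngrams(df[df["commenting"] == 1]["Description"], ngram_range=(2, 2)))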

Commenting Category

[Figures: most common unigrams, bigrams, and trigrams in the Commenting category]

Ogling Category

[Figures: most common unigrams, bigrams, and trigrams in the Ogling category]

Groping Category

[Figures: most common unigrams, bigrams, and trigrams in the Groping category]

t-SNE

We vectorize the victim stories and then apply dimensionality reduction so that the harassment categories can be visualized easily.
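
A sketch of this step, assuming TF-IDF vectors are reduced to two dimensions with scikit-learn's TSNE (the vectorizer choice and parameter values are illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

tfidf = TfidfVectorizer(max_features=5000)
X = tfidf.fit_transform(df["Description"]).toarray()

tsne = TSNE(n_components=2, perplexity=50, random_state=42)
X_2d = tsne.fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=df["label"], cmap="tab10", s=5)
plt.title("t-SNE of victim stories")
plt.show()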

[Figure: t-SNE plot of vectorized victim stories]

t-SNE is stochastic in nature, so different runs give different visualizations. I therefore tried multiple perplexities and iteration counts to obtain the plot above, which indicates that the classes can be segregated from one another.

Word Cloud

We have also implemented word clouds to visualize the most frequent words in each category.
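
A sketch using the wordcloud package, assuming the per-category indicator columns introduced earlier:

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# all cleaned text from stories labelled as Commenting
text = " ".join(df[df["commenting"] == 1]["Description"])

wc = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()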

Commenting Category

[Figure: word cloud for the Commenting category]
  • From the above, we can deduce that for the commenting type of harassment most of the offenders were boys, and the events usually took place at a college, station, bus, or school.

Ogling Category

[Figure: word cloud for the Ogling category]
  • From the above, we can deduce that for the ogling type of harassment most of the offenders were guys, and the events usually took place on the street while the victims were walking, passing by, or going to college.

Groping Category

[Figure: word cloud for the Groping category]
  • From the above, we can deduce that for the groping type of harassment most of the offenders were men, and the events usually took place in crowded public places such as buses and stations while the victims were travelling.

Scatter Text

We use scatter text to visualize distinguishing terms and their frequencies. A scatter text plot works on categorical data as a binary comparison, so we create a separate categorical column for each harassment type.
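
A sketch of this step with the scattertext library (the tool covered in the Analytics India Magazine reference), assuming a categorical column commenting_cat with the values "commenting" and "not commenting":

import scattertext as st

corpus = st.CorpusFromPandas(df,
                             category_col="commenting_cat",
                             text_col="Description",
                             nlp=st.whitespace_nlp_with_sentences).build()

html = st.produce_scattertext_explorer(corpus,
                                       category="commenting",
                                       category_name="Commenting",
                                       not_category_name="Not Commenting")

with open("commenting_scattertext.html", "w", encoding="utf-8") as f:
    f.write(html)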

[Figure: data frame with the categorical columns for each harassment type]

Scatter Text Plot for Commenting Category

[Figure: scatter text plot for the Commenting category]

From the above figure, we can identify the top commenting and non-commenting words. The top-right of the chart contains the most frequent shared terms, and the bottom-left contains the least frequent of them.

Scatter Text Plot for Ogling Category

[Figure: scatter text plot for the Ogling category]

From the above figure, we can identify the top ogling and non-ogling words. The top-right of the chart contains the most frequent shared terms, and the bottom-left contains the least frequent of them.

Scatter Text Plot for Groping Category

[Figure: scatter text plot for the Groping category]

From the above figure, we can identify the top groping and non-groping words. The top-right of the chart contains the most frequent shared terms, and the bottom-left contains the least frequent of them.

Machine Learning Models

For training, we did a basic train-test split and tried various models.


We trained several machine learning models using BOW, TF-IDF, and 300-dimensional GloVe features, and observed the metric values below.
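
As one example of these pipelines, the sketch below trains a one-vs-rest Linear SVC on bag-of-words features (the column names and hyperparameters are assumptions):

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score, hamming_loss

X = df["Description"]
y = df[["commenting", "ogling", "groping"]].values      # multi-label targets (assumed column names)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

bow = CountVectorizer(ngram_range=(1, 2))
X_train_bow = bow.fit_transform(X_train)
X_test_bow = bow.transform(X_test)

clf = OneVsRestClassifier(LinearSVC(C=1.0))
clf.fit(X_train_bow, y_train)
y_pred = clf.predict(X_test_bow)

print("Macro F1    :", f1_score(y_test, y_pred, average="macro"))
print("Micro F1    :", f1_score(y_test, y_pred, average="micro"))
print("Hamming loss:", hamming_loss(y_test, y_pred))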

[Table: metric values for the machine learning models with BOW, TF-IDF, and GloVe features]

From the above, the highest Macro F1-score of 0.63 comes from Linear SVC with the BOW vectorizer; moreover, the BOW and TF-IDF vectorizers outperform the GloVe vectorizer on every metric.

We also move on to implementing deep learning models.

Deep Learning Models

CNN Model

We have built a convolutional neural network by loading 300-dimensional GloVe vectors into the embedding layer.

[Figure: CNN model architecture]

Because we are working on multi-label classification, the last layer uses a sigmoid activation and the model is trained with a binary cross-entropy loss.
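
A minimal Keras sketch of such a network, assuming padded integer sequences of length max_len, a vocabulary of size vocab_size, and a pre-built GloVe embedding_matrix of shape (vocab_size, 300); the layer sizes are illustrative, not the exact architecture used in the project:

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense, Dropout
from tensorflow.keras.initializers import Constant

inputs = Input(shape=(max_len,))
x = Embedding(vocab_size, 300,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False)(inputs)              # frozen GloVe 300d embeddings
x = Conv1D(128, 3, activation="relu")(x)
x = GlobalMaxPooling1D()(x)
x = Dense(64, activation="relu")(x)
x = Dropout(0.3)(x)
outputs = Dense(3, activation="sigmoid")(x)         # one sigmoid unit per harassment label

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])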

CNN-LSTM Model

We have also built a CNN-LSTM model by again loading the 300-dimensional GloVe vectors into the embedding layer and adding an LSTM layer after the convolution.
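
Under the same assumptions as the CNN sketch above, the CNN-LSTM variant could look like this:

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, Conv1D, LSTM, Dense
from tensorflow.keras.initializers import Constant

inputs = Input(shape=(max_len,))
x = Embedding(vocab_size, 300,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False)(inputs)
x = Conv1D(128, 3, activation="relu")(x)
x = LSTM(64)(x)                                     # sequence modelling over the convolutional features
x = Dense(64, activation="relu")(x)
outputs = Dense(3, activation="sigmoid")(x)

cnn_lstm = Model(inputs, outputs)
cnn_lstm.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])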

[Figure: CNN-LSTM model architecture]

Summary of Both DL Models

[Table: metric values for the two deep learning models]

Based on the above metrics, we choose the CNN as the best model.

Deployment of Model

I have created a web app using Flask and deployed my best model. Below is a video of the deployed model running.
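
The deployment code itself is not shown in this article; the following is a minimal sketch of what such a Flask endpoint could look like, where clean_text and vectorize stand in for the preprocessing and feature extraction used at training time (all names here are hypothetical):

from flask import Flask, request, jsonify

app = Flask(__name__)
LABELS = ["commenting", "ogling", "groping"]

@app.route("/predict", methods=["POST"])
def predict():
    story = request.json["story"]
    cleaned = clean_text(story)                    # same cleaning as in training (assumed helper)
    probs = model.predict(vectorize(cleaned))[0]   # model and vectorize() loaded at startup (assumed)
    return jsonify({label: bool(p > 0.5) for label, p in zip(LABELS, probs)})

if __name__ == "__main__":
    app.run(debug=True)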

Conclusion

In conclusion, this blog sheds light on the pressing issue of sexual harassment and emphasizes the low reporting incidence as a contributing factor to its prevalence. It highlights the importance of victims reporting their experiences to enable authorities to guide people and drive a change in offender behavior.

This blog also discusses the implementation of natural language processing techniques, traditional machine learning algorithms, and convolutional neural networks, with the CNN model, augmented by an LSTM layer, generating superior results. Through this work, the aim is to empower individuals, provide guidance, and promote societal change in tackling the pervasive issue of sexual harassment.

Key Takeaways

  • CNN model outperformed traditional machine learning algorithms in predicting multi-label classification of harassment forms.
  • The LSTM layer used in the CNN model resulted in a significant improvement in performance metrics.
  • The article highlights the superiority of the CNN model and the impact of the LSTM layer on its performance.

Future Work

  • We need to gather more data to improve the performance metric values on the test dataset.
  • We can try BERT embeddings and FastText word embeddings.
  • We can design a custom model architecture to obtain better performance metric values.

You can find my complete code over here.

Frequently Asked Questions

Q1: What is the significance of utilizing NLP techniques in this context?

A: NLP plays a vital role in analyzing textual data and extracting insights. In the context of predicting multi-label harassment, NLP preprocesses victim stories by cleaning the data, removing symbols, digits, and stop words.

Q2: How do CNNs contribute to the prediction of multi-label classifications in this study?

A: CNNs excel at processing structured data like images or text represented as word embeddings. In this article, CNNs process GloVe word embeddings of victim stories, capturing key features and patterns. The model’s convolutional layers extract relevant information, while subsequent layers learn complex relationships for multi-label predictions. Integrating CNNs in this study boosts the classification model’s performance.

Q3: What is the role of the LSTM layer in the CNN model? How does it improve performance?

A: The LSTM (Long Short-Term Memory) layer is a type of recurrent neural network layer that can effectively model sequential data. In the CNN-LSTM model, the LSTM layer is added after the convolutional layers. It helps capture the contextual dependencies and long-term relationships within the victim stories. By incorporating the LSTM layer, the model gains the ability to understand the sequential nature of the text, resulting in improved performance metrics, such as higher accuracy and F1-scores.

Q4: Which vectorization techniques do the machine learning and deep learning models use?

A: The study utilized various vectorization techniques: Bag-of-Words (BOW), TF-IDF, and GloVe word embeddings. BOW and TF-IDF count word occurrences and determine importance based on frequency. GloVe word embeddings represent words as dense vectors, capturing semantic meaning.

Q5: What performance metrics do we use to evaluate the multi-label classification models?

A: The Macro F1-score and Micro F1-score are used to evaluate the multi-label classification models. These metrics consider the correctness of predictions for each label individually and average them either across all instances (Micro) or across all labels (Macro).

References

  • https://aclanthology.org/D18-1303.pdf
  • https://stackoverflow.com/questions/19790188/expanding-english-language-contractions-in-python/47091490#47091490
  • https://www.kdnuggets.com/2020/09/geographical-plots-python.html
  • https://analyticsindiamag.com/visualizing-sentiment-analysis-reports-using-scattertext-nlp-tool/

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

