Research Publications

Born with a silver spoon? Investigating socioeconomic bias in large language models

Association for Computational Linguistics

(Coming soon!)

From Prejudice to Parity: A new approach to debiasing large language model word embeddings


Embeddings play a pivotal role in the efficacy of large language models. They are the bedrock on which these models grasp contextual relationships and develop a nuanced understanding of language, which in turn lets them perform remarkably well on a plethora of complex tasks that require a fundamental understanding of human language. Given that these embeddings often reflect or exhibit bias, it stands to reason that models built on them may inadvertently learn that bias. In this work, we build on seminal prior work and propose DeepSoftDebias, an algorithm that uses a neural network to perform 'soft debiasing'. We exhaustively evaluate this algorithm across a variety of state-of-the-art datasets, accuracy metrics, and challenging NLP tasks, and find that DeepSoftDebias outperforms current state-of-the-art methods at reducing bias across gender, race, and religion.
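A minimal sketch of what neural 'soft debiasing' can look like in practice: a small network maps embeddings to debiased embeddings while trading off fidelity to the original vectors against the component along a precomputed bias direction. The network shape, loss weights, and dimensions below are illustrative assumptions, not the published DeepSoftDebias implementation.

# Illustrative only: names, loss weights, and dimensions are assumptions,
# not the published DeepSoftDebias code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDebiaser(nn.Module):
    """Maps embeddings to 'debiased' embeddings via a small MLP."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb)

def debias_loss(debiased, original, bias_dir, lam=1.0):
    recon = F.mse_loss(debiased, original)           # stay close to the input
    bias_dir = bias_dir / bias_dir.norm()
    bias_term = (debiased @ bias_dir).pow(2).mean()  # shrink the bias component
    return recon + lam * bias_term

# One optimisation step on a stand-in batch of 300-d embeddings.
model = SoftDebiaser(dim=300)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
embeddings = torch.randn(64, 300)                    # placeholder batch
bias_direction = torch.randn(300)                    # e.g. a he-she difference vector
loss = debias_loss(model(embeddings), embeddings, bias_direction)
loss.backward()
opt.step()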

Language models (mostly) do not consider emotion triggers when predicting emotions

Situations and events evoke emotions in humans, but to what extent do they inform the predictions of emotion detection models? Prior work on emotion trigger or cause identification focused on training models to recognize events that trigger an emotion. Instead, this work investigates how well human-annotated emotion triggers correlate with the features that models deem salient in their prediction of emotions. First, we introduce EMOTRIGGER, a novel dataset of 900 social media posts sourced from three different datasets and annotated by experts for emotion triggers with high agreement. Using EMOTRIGGER, we evaluate the ability of large language models (LLMs) to identify emotion triggers and conduct a comparative analysis of the features that LLMs and fine-tuned models consider important for these tasks. Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.

"Female Astronauts, because sandwiches won't make themselves up there!": Towards multimodal misogyny detection in memes

Association for Computational Linguistics

A rise in the circulation of memes has led to the spread of a new form of multimodal hateful content. Unfortunately, the degree of hate women receive on the internet is disproportionately skewed against them. This, combined with the fact that multimodal misogyny is more challenging to detect than traditional text-based misogyny, makes identifying misogynistic memes online a task of utmost importance. To this end, the MAMI dataset was released, consisting of 12,000 memes annotated for misogyny and four sub-classes of misogyny: shame, objectification, violence, and stereotype. While this balanced dataset is widely cited, we find that the task itself remains largely unsolved. Thus, in our work, we investigate the performance of multiple models, analyse why even state-of-the-art models find this task so challenging, and examine whether domain-specific pretraining helps. Our results show that pretraining BERT on hateful memes and leveraging an attention-based approach with ViT outperforms state-of-the-art models by more than 10%. Further, we provide insight into why these models may be struggling with this task through an extensive qualitative analysis of random samples from the test set.
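A rough sketch of what attention-based fusion of BERT text features and ViT image features can look like for meme classification. The checkpoint names, head sizes, and fusion layout are assumptions for illustration, not the paper's exact architecture.

# Illustrative sketch: cross-attention between BERT token features and ViT
# patch features, followed by a binary misogyny classifier. Not the paper's
# published model; checkpoints and dimensions are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class MemeClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.text_enc = BertModel.from_pretrained("bert-base-uncased")
        self.image_enc = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        # Text tokens attend over image patch embeddings (both are 768-d).
        self.fusion = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(768, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        text = self.text_enc(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        image = self.image_enc(pixel_values=pixel_values).last_hidden_state
        fused, _ = self.fusion(query=text, key=image, value=image)
        # Pool the fused [CLS] position and classify misogynous vs. not.
        return self.classifier(fused[:, 0])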

BREE-HD: A transformer-based model to identify threats on Twitter


With the world transitioning to an online reality and a surge in social media users, detecting online harassment and threats has become more pressing than ever. Gendered cyber-hate causes women significant social, psychological, reputational, economic, and political harm. To tackle this problem, we develop a dataset of sexist and non-sexist tweets and propose BREE-HD, a transformer-based model that classifies them as threats or non-threats. BREE-HD performs extraordinarily well, reaching 97% accuracy when trained on our dataset to detect threats within a collection of derogatory tweets. To provide insight into how BREE-HD makes its classifications, we apply explainable AI (XAI) techniques and present a detailed qualitative analysis of our proposed methodology. As an extension of this work, BREE-HD could be deployed as part of a system that detects targeted threats in real time.
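As a rough illustration of the XAI angle, the sketch below applies LIME to a stand-in transformer classifier to surface the words that push a tweet toward the 'threat' class. The checkpoint name, class labels, and helper function are assumptions for demonstration; they are not BREE-HD's actual model or explanation pipeline.

# Illustrative LIME explanation over a stand-in classifier (not BREE-HD).
import numpy as np
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def predict_proba(texts):
    # LIME expects an (n_samples, n_classes) array of class probabilities.
    batch = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["non-threat", "threat"])
explanation = explainer.explain_instance(
    "example tweet to explain", predict_proba, num_features=6)
print(explanation.as_list())   # words pushing the prediction toward each class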

"Hold on honey, men at work": A semi-supervised approach to detecting sexism in sitcoms

Association for Computational Linguistics

Television shows play an important role in propagating societal norms, and owing to its popularity, the situational comedy (sitcom) genre contributes significantly to how those norms develop. In an effort to analyze the content of television shows belonging to this genre, we present a dataset of dialogue turns from popular sitcoms annotated for the presence of sexist remarks. We propose a domain-specific, semi-supervised architecture for detecting sexism, train it using domain-adaptive learning, and apply it to our dataset to analyze the evolution of sexist content over the years. Through extensive experiments, we show that our model often yields better classification performance than generic deep-learning-based sentence classification that does not employ domain-specific training. A quantitative analysis along with a detailed error analysis presents the case for our proposed methodology.
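One common way to use unlabeled dialogue turns in a semi-supervised setup is self-training, sketched below with scikit-learn; the features, base classifier, and confidence threshold are illustrative assumptions rather than the paper's actual architecture.

# Minimal self-training sketch: confident predictions on unlabeled turns are
# pseudo-labeled and folded back into training. Assumed stand-in, not the
# paper's domain-specific model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Labeled dialogue turns (1 = sexist, 0 = not) plus unlabeled turns (-1).
texts = ["labeled line one", "labeled line two", "unlabeled line"]
labels = [1, 0, -1]                       # -1 marks unlabeled examples

features = TfidfVectorizer().fit_transform(texts)
self_training = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
self_training.fit(features, labels)       # pseudo-labels confident unlabeled turns

print(self_training.predict(features))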

#Whydidyoustay? Using NLP to analyze causes of long-lasting domestically abusive relationships

Association for Computational Linguistics

The pandemic has caused an increase in domestic violence cases and has made it even more difficult for victims to leave. Although resources exist to help people stuck in domestically abusive relationships walk away, there is a huge gap between the resources available and those actually availed. This project, still in its infancy, aims to answer the research question: how can we use NLP to examine the reasons behind long-lasting domestically abusive relationships? We aim to conduct an analysis based on multiple sources of data and leverage various modeling approaches to arrive at an optimal answer to our research question.

Toward the early detection of child predators using deep learning

Association for Computational Linguistics

Due to the pandemic, children have more time and unfettered access to electronic devices. This, combined with the desire to interact with new people, has given child predators more opportunities to groom children. Our work aims to leverage the PAN12 dataset and deep learning methods such as BERT-based approaches and Bi-LSTMs to develop a model that can identify child predators within minutes of a conversation being initiated.
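As a rough sketch of how early detection can work, the toy Bi-LSTM below re-scores a conversation after every new message and flags it once the predicted risk crosses a threshold. The vocabulary size, sequence length, threshold, and untrained model are assumptions for illustration, not the project's actual configuration.

# Toy early-detection loop over a chat; training on labeled PAN12 chats is elided.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN = 20000, 200
model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 128),                  # token id -> vector
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),         # P(predatory conversation)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def score_conversation(encoded_messages, threshold=0.9):
    """Re-score after each new message; flag as soon as risk crosses the threshold."""
    for i in range(1, len(encoded_messages) + 1):
        seq = np.concatenate(encoded_messages[:i])[:MAXLEN]
        seq = np.pad(seq, (0, MAXLEN - len(seq))).astype("int32")[None, :]
        if model.predict(seq, verbose=0)[0, 0] > threshold:
            return i                               # flagged after i messages
    return None                                    # never crossed the threshold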

Select Personal Projects

Polly: An AI-driven platform for PCOS awareness

Feb 2021 - June 2021

To give young women around the world the opportunity to assess their likelihood of having polycystic ovary syndrome (PCOS), I developed a web application that comprises a conversational agent named Polly. Polly is a retrieval-based chatbot that considers factors such as menstrual cycle regularity, visible symptoms of excess androgens, body mass index, period pains, and family history to determine whether a user is prone to PCOS. The application is currently in the deployment phase, and we have also developed an Android application for the same cause. This project was a team effort; my responsibility was developing Polly and integrating it with the rest of the application. Polly is powered by the Google Dialogflow API.
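For context, the snippet below shows the standard Dialogflow detect-intent flow a backend like this typically uses to forward a user message to the agent and return its reply; the project ID, session ID, and function name are placeholders, not Polly's actual integration code.

# Standard Dialogflow detect-intent call; IDs below are placeholders.
from google.cloud import dialogflow

def ask_polly(text: str, project_id: str = "polly-pcos-demo",
              session_id: str = "user-123") -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en"))
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input})

    # The matched intent's fulfillment text is the chatbot's reply.
    return response.query_result.fulfillment_text

# Example: print(ask_polly("My periods are irregular and I have acne."))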

Analyzing Sentiment with the IMDb Dataset

Feb 2021 - June 2021

Reproduced a research paper published at IEEE CICN in 2020 that used the IMDb dataset to examine how certain supervised machine learning algorithms could be leveraged to classify movie reviews as negative or positive. I also investigated whether additional supervised models could outperform the classifiers in the original paper. All models were evaluated against five metrics, and the support vector classifier performed best, with an accuracy of 90%. The frameworks used include Pandas, NumPy, Seaborn, Matplotlib, and scikit-learn.
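A compact sketch of the kind of pipeline such a comparison involves: TF-IDF features feeding a linear support vector classifier, evaluated on held-out reviews. The CSV path and column names are assumptions about how the IMDb data is stored.

# Sketch of a TF-IDF + linear SVM baseline; data path and columns are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("imdb_reviews.csv")          # assumed columns: review, sentiment
X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["sentiment"], test_size=0.2, random_state=42)

clf = make_pipeline(TfidfVectorizer(stop_words="english", max_features=50_000),
                    LinearSVC())
clf.fit(X_train, y_train)

preds = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print("macro F1:", f1_score(y_test, preds, average="macro"))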

Review Bay: A sentiment analysis platform

Dec 2019 - Feb 2020

For the first round of the Smart India Hackathon 2020, we developed a sentiment analysis platform based on a problem statement given by ISRO. The platform aggregates sentiment, provides in-depth product analytics, and classifies product reviews. It was built using Django, Flask, TensorFlow, Keras, Pandas, NumPy, and Matplotlib.
