PROJECT 1
Unlocking the Impact of Coupons: A Case Study Across Gold, Silver, and Bronze Members [A/B Testing]

PROJECT 2
Unveiling Market Sentiments: Positive, Negative, or Neutral Sentiment Analysis

Key Takeaways for Gold Customers:
1. Coupons did not increase total purchases for gold customers, as they are regular purchasers who tend to buy consistently with or without discounts.
2. Customer satisfaction levels from Key Insight 1 showed that most gold customers are already satisfied with their purchasing experience.
Recommendation: No need to offer coupons to gold customers, as they are likely to make purchases without any incentive.
Recommendation for Business Benefit:
1. Coupons can be reallocated to other customer groups (those with lower satisfaction or lower purchase activity).
2. This approach can help attract new customers or drive more purchases from less engaged groups.
3. The company can save money by avoiding unnecessary discounts for gold customers, ensuring a more efficient use of resources.

Key Takeaway for Silver Customers:
1. Observed Difference: The significant difference in total spending for silver customers is smaller in magnitude than for gold customers but still negative, indicating that discounts did not increase total purchases for this group either.
2. Satisfaction Levels: However, Key Insight 1 revealed that silver customers have low satisfaction levels compared to gold customers.
Recommendation: Despite the negative spending impact, it might be beneficial to continue offering coupons to silver customers as a way to improve satisfaction and potentially drive future engagement.
Recommendation for Business Benefit:
1. Targeting low-satisfaction customers with incentives like coupons can help boost their overall satisfaction, potentially leading to more loyalty and engagement over time.
2. By focusing on improving the experience of less satisfied customers, the company could foster long-term customer retention in this group.

Key Takeaway for Bronze Customers:
Observed Difference: The significant positive difference in total spending for bronze customers indicates that coupons have a positive impact on their purchases.
Satisfaction Levels: Bronze customers have low satisfaction levels and low purchasing activity, making them ideal candidates for receiving discounts.
Recommendation for Business Benefit:
1. Offering coupons to bronze customers can increase their spending and help retain a less engaged segment.
2. Targeting this group with coupons can improve satisfaction and drive longer-term customer loyalty.

PROJECT 4
Recommender System: Text Analysis with SVM and Content-Based Recommendation

Key Takeaways
Effective Text Preprocessing: Thorough text preprocessing steps, such as lowercasing, punctuation removal, and stopword elimination, played a vital role in cleaning and standardizing the dataset. This preparation improved the model's focus on meaningful words and enhanced classification accuracy.
Feature Extraction with TF-IDF: Using TF-IDF for feature extraction allowed the model to capture the relative importance of words across documents. Limiting the vocabulary to the top 5,000 features provided an efficient balance between information retention and model complexity.
Model Selection and Tuning: Experimentation with various SVM kernels (linear, polynomial, RBF, and sigmoid) demonstrated that the polynomial kernel achieved the highest performance, with an accuracy of 88% on the test set. This result highlighted the importance of kernel choice for capturing complex patterns in textual data.
Recommendation System with Cosine Similarity: Building a recommendation system using cosine similarity provided an intuitive way to measure content similarity. This approach enabled the system to recommend relevant documents based on the user's input, enhancing user experience by making related content accessible.
Scalability of the Classifier: The classifier's success with diverse topics (e.g., technology, sports, and religion) demonstrated its adaptability to various content categories. The model's ability to handle such a broad range of topics makes it a strong candidate for scalable applications in content filtering and recommendation.
Future Improvements: While the current system achieved solid performance, future work could explore advanced NLP techniques, such as word embeddings or deep learning models, to capture even more nuanced patterns in textual data.

Key Takeaways
Custom-Built Embeddings: Without relying on pretrained embeddings, I developed word representations from scratch. This approach allowed the model to adapt specifically to the unique language and context of financial text.
Effective Handling of Unknown Words: By dynamically learning representations for unknown words and using random initialization when necessary, the model became resilient to new or rare financial terms—essential for real-world financial applications.
Optimal Hyperparameter Configuration: Extensive tuning led to the ideal combination of vector size, window size, negative sampling, LSTM units, and dropout rate, achieving a peak accuracy of 0.7313 and maximizing model performance.
Enhanced Financial Text Analysis: This custom-built model effectively captures sentiment in financial texts, making it a valuable tool for insights in finance, from market sentiment analysis to informed decision-making.
Future Improvements: With everything built from scratch, the model provides a strong foundation for future enhancements. Potential next steps include experimenting with transformer models or incorporating additional financial datasets.

PROJECT 3
Smart Savings on the Go: Personalized In-Vehicle Coupon Recommendations
Key Takeaways
Contextual Relevance: Features like destination type, weather, and companionship strongly influenced coupon acceptance rates.
1. The Naive Bayes model established a solid baseline with an accuracy of approximately 70% and a recall rate of 78.5%, showing potential for recommending coupons based on probabilistic interpretations of user features.
2. Logistic Regression provided insights into feature importance but showed limited accuracy.
3. Support Vector Machine (SVM) attempted non-linear decision boundaries, but its accuracy and sensitivity to scaling highlighted its challenges with categorical data and the need for models that handle non-linearities better.
Neural Network Superiority: The neural network model, designed with multiple layers and dropout regularization, achieved similar accuracy (~70%) to Naive Bayes but with greater adaptability to complex, non-linear patterns in user behavior. Precision of 68-70% and recall of 81% reinforced its effectiveness for this recommendation task.
Feature Engineering Impact: Customizing features, such as time-specific recommendations (e.g., morning coffee coupons), improved model focus and effectiveness, suggesting that tailored recommendations based on user preferences and context could enhance coupon redemption rates.
Future Directions: The study highlights potential for refining model performance through further feature selection, model tuning, and possibly integrating real-time contextual data to improve prediction accuracy and relevance of recommendations.

PROJECT 5
Understanding Customer Intent: A Journey Through Banking Conversations

Key Takeaways
Data-Driven Design with EDA: Exploratory data analysis revealed key patterns, such as the brevity of queries and frequently occurring bigrams. These insights informed the model's design, helping align it with typical customer behaviors in banking.
Targeted Preprocessing: Customized text preprocessing steps, including the retention of punctuation and stop words, preserved essential context in banking queries. This approach allowed the model to interpret subtle nuances in customer language, which is crucial for accurate intent classification.
Word Embeddings for Enhanced Understanding: Using Word2Vec embeddings provided dense vector representations of words, enriching the model’s understanding of banking-specific terms and relationships—valuable for recognizing customer intents.
Effective BiLSTM Model: The bidirectional LSTM (BiLSTM) architecture enabled the model to understand each query in its full context, improving classification accuracy. Processing queries both forwards and backwards helped capture nuanced relationships in customer intents.
Strong and Reliable Performance: The final model achieved 85% accuracy with minimal overfitting, as shown by closely aligned training and validation metrics. This performance validates the model's robustness and its suitability for real-world banking applications.


STORY TELLING

Project 1
Unlocking the Impact of Coupons: A Case Study Across Gold, Silver, and Bronze Members
INTRODUCTION

In the dynamic world of e-commerce, coupons are a powerful tool to drive sales and enhance customer loyalty. However, without a personalized approach, these coupons risk being wasted on customers who may not need them or failing to engage those who would benefit the most. This case study examines how targeted coupon strategies can be tailored to specific customer segments—gold, silver, and bronze. By focusing on customers with lower satisfaction or purchase rates, while minimizing offers to those who are likely to buy regardless, the aim is to ensure coupons are used effectively to both increase sales and improve customer satisfaction.

Key Insight 1: Customer Satisfaction Levels Across Gold, Silver, and Bronze Members
Customer satisfaction is a key indicator of how well coupon strategies are performing. In this analysis, I examined how satisfaction varied across the gold, silver, and bronze customer segments. Gold members are predominantly satisfied, suggesting that coupons might not be as essential for retaining their loyalty. Silver members show a more balanced mix of satisfaction levels, indicating room for targeted coupon strategies to improve engagement. Meanwhile, bronze members, with a wider spread of satisfaction levels, especially among the unsatisfied, are prime candidates for personalized offers to boost their experience and drive more purchases.
While this is a great start, it's not enough to come to a full conclusion just yet—like trying to solve a mystery with only half the clues. So, let's explore further! After all, we don't want to give the gold members a coupon when they'd already buy without it, right?

I conducted a permutation test to assess the impact of applying discounts on total spending for gold customers. Customers were divided into a control group (no discount applied) and a treatment group (discount applied). The test aimed to determine if there was a statistically significant difference in total spending between the two groups.
Null Hypothesis (H0):
H0: There is no significant difference in total spending between the treatment group (discount applied) and the control group (no discount applied).
Alternative Hypothesis (H1):
H1: There is a significant difference in total spending between the treatment group (discount applied) and the control group (no discount applied).
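To make the test concrete, below is a minimal sketch of a two-sample permutation test of this kind. The function name, the 10,000 resamples, and the reporting of the null distribution's 95% interval are illustrative assumptions, not the exact implementation used in the project.

```python
import numpy as np

def permutation_test(control, treatment, n_permutations=10_000, seed=42):
    """Two-sample permutation test on the difference in mean total spending."""
    rng = np.random.default_rng(seed)
    control = np.asarray(control, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    observed = treatment.mean() - control.mean()

    pooled = np.concatenate([control, treatment])
    n_treat = len(treatment)
    null_diffs = np.empty(n_permutations)
    for i in range(n_permutations):
        rng.shuffle(pooled)  # randomly relabel customers across the two groups
        null_diffs[i] = pooled[:n_treat].mean() - pooled[n_treat:].mean()

    # Two-sided p-value: how often a random relabeling is at least as extreme
    p_value = np.mean(np.abs(null_diffs) >= abs(observed))
    # 95% interval of the null distribution, reported for context
    ci_low, ci_high = np.percentile(null_diffs, [2.5, 97.5])
    return observed, p_value, (ci_low, ci_high)

# Toy usage: obs, p, null_ci = permutation_test([10, 12, 9], [7, 8, 6])
```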

Key Insight 2: Permutation Test for Gold Customers
I conducted a permutation test comparing gold customers in a control group (no discount) and a treatment group (with discount). The observed difference in total spending was −310.98, with a p-value of 0.0, indicating a significant difference.
The 95% interval of the permutation (null) distribution for the difference in means was (-39.13, 39.29); the observed difference of -310.98 lies far outside this range, reinforcing that the result is not due to chance and that discounts are not effective for this group. Cliff's Delta of 1.0 shows a strong effect, implying that gold customers with discounts spent less.

Key Takeaways for Gold Customers:
1. Coupons did not increase total purchases for gold customers, as they are regular purchasers who tend to buy consistently with or without discounts.
2. Customer satisfaction levels from Key Insight 1 showed that most gold customers are already satisfied with their purchasing experience.
Recommendation: No need to offer coupons to gold customers, as they are likely to make purchases without any incentive.
Recommendation for Business Benefit:
1. Coupons can be reallocated to other customer groups (those with lower satisfaction or lower purchase activity).
2. This approach can help attract new customers or drive more purchases from less engaged groups.
3. The company can save money by avoiding unnecessary discounts for gold customers, ensuring a more efficient use of resources.

Key Insight 3: Permutation Test for Silver Customers
For silver customers, I performed a permutation test comparing the control group (no discount) and the treatment group (with discount). The observed difference in total spending was −115.10, with a p-value of 0.0, indicating a significant difference.
The 95% interval of the permutation (null) distribution for the difference in means was (-21.96, 22.20); the observed difference of -115.10 again falls far outside this range, and the Cliff's Delta was 1.0, showing a strong negative effect. Similar to the gold customers, silver customers with discounts spent less than those without, suggesting that discounts may be reducing total purchases for this group as well.
Key Takeaway for Silver Customers:
1. Observed Difference: The significant difference in total spending for silver customers is smaller in magnitude than for gold customers but still negative, indicating that discounts did not increase total purchases for this group either.
2. Satisfaction Levels: However, Key Insight 1 revealed that silver customers have low satisfaction levels compared to gold customers.
Recommendation: Despite the negative spending impact, it might be beneficial to continue offering coupons to silver customers as a way to improve satisfaction and potentially drive future engagement.
Recommendation for Business Benefit:
1. Targeting low-satisfaction customers with incentives like coupons can help boost their overall satisfaction, potentially leading to more loyalty and engagement over time.
2. By focusing on improving the experience of less satisfied customers, the company could foster long-term customer retention in this group.
Key Insight 4: Permutation Test for Bronze Customers
For bronze customers, I performed a permutation test comparing the control group (no discount) and the treatment group (with discount). The observed difference in total spending was 52.23, with a p-value of 0.0, indicating a significant difference in favor of customers who received discounts.
The 95% interval of the permutation (null) distribution was (-11.42, 11.53); the observed difference of 52.23 falls well outside it, and the Cliff's Delta of -0.993 indicates a strong effect. This means bronze customers with discounts spent significantly more than those without discounts, highlighting that coupons are effective for this customer segment.
Key Takeaway for Bronze Customers:
1. Observed Difference: The significant positive difference in total spending for bronze customers indicates that coupons have a positive impact on their purchases.
2. Satisfaction Levels: Bronze customers have low satisfaction levels and low purchasing activity, making them ideal candidates for receiving discounts.
3. Recommendation: Coupons are essential for bronze customers, as they not only boost total purchases but could also help improve customer satisfaction and engagement.
Recommendation for Business Benefit:
1. Offering coupons to bronze customers can increase their spending and help retain a less engaged segment.
2. Targeting this group with coupons can improve satisfaction and drive longer-term customer loyalty.
Data Limitation and Challenge:
1. Performing an exhaustive permutation test was not feasible due to a lack of computational resources, as it requires significant processing power to calculate all possible combinations.
2. Additionally, a t-test was not suitable because the purchasing data wasn't normally distributed, limiting the statistical options available for analysis.
3. The dataset was also limited to 350 records, but we still tried to derive as many insights as possible from the available data.

Project 2
Unveiling Market Sentiments: Positive, Negative, or Neutral Sentiment Analysis
INTRODUCTION

Understanding whether the tone of financial documents such as news articles and reports is positive, negative, or neutral provides key insights that can impact market behavior, inform investment decisions, and drive strategic actions.
This project aims to classify sentiment in financial texts using a model built from scratch, with custom embeddings tailored to capture the unique nuances of financial language. This approach equips the model to interpret complex financial sentiments, offering a powerful tool for market analysis.

Exploring Sentence Lengths: My First Step
Analyzing sentence lengths helped me determine the appropriate cutoff for padding sentences in the dataset. The box plot indicated that the majority of sentences were comfortably within the 20 to 40-word range, with a median at 30 words.
Action Taken: Based on this data, I chose to pad all sentences to a fixed length close to the upper quartile to preserve most of the textual information without introducing too much noise from the longer outliers.
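As a minimal sketch of this step (the toy index sequences stand in for the encoded corpus, and the exact cutoff is an assumption):

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy stand-in for sentences already encoded as word indices
encoded_sentences = [[4, 12, 7], [9, 3, 3, 8, 1, 15], [2, 6, 11, 5]]

lengths = np.array([len(s) for s in encoded_sentences])
q1, median, q3 = np.percentile(lengths, [25, 50, 75])
print(f"median={median}, upper quartile={q3}")

# Pad (or truncate) everything to a length near the upper quartile
max_len = int(q3)
padded = pad_sequences(encoded_sentences, maxlen=max_len,
                       padding='post', truncating='post')
```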

Refining Baseline Models: Impact of Stopwords in Financial Context
Model Experimentation: I tested two baseline models with different text preprocessing strategies, using CBOW for Word2Vec embeddings, to determine their impact on sentiment analysis:
1. Model with Stopwords: This version included stopwords, under the hypothesis that in financial contexts, common words might carry significant sentiment cues.
2. Model without Stopwords: This model excluded stopwords to concentrate on the main lexical content.
Preprocessing Insights: Lemmatization was trialed but showed no beneficial effects on the results and was therefore omitted from further processing.
Model Architecture: Both models employed LSTM networks to capture the sequential nature of text data. I integrated early stopping with a patience of three epochs to optimize training and avoid overfitting (see the sketch below).
Key Findings: Interestingly, the model that included stopwords demonstrated improved performance. This suggests that in financial texts, stopwords can hold contextual significance that enhances sentiment analysis accuracy. This outcome guided my subsequent adjustments and optimizations to the sentiment analysis model.
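For concreteness, here is a minimal sketch of the two preprocessing variants and the early-stopping setup; the sample sentence and the commented training call are illustrative assumptions.

```python
import nltk
from nltk.corpus import stopwords
from tensorflow.keras.callbacks import EarlyStopping

# nltk.download("stopwords")  # one-time download if not already present

# One tokenized sentence stands in for the financial corpus
tokens = ["the", "profit", "rose", "in", "the", "quarter"]

stop_set = set(stopwords.words("english"))
variant_with_stopwords = tokens                                       # baseline 1
variant_without_stopwords = [w for w in tokens if w not in stop_set]  # baseline 2

# Early stopping with a patience of three epochs, used when training both LSTMs
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=30, callbacks=[early_stop])
```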

Visualizing Language: Uncovering Financial Semantics
Financial Terms: The model clusters essential terms like "profit," "sales," "million," and "quarter," highlighting its adeptness at capturing financial language crucial for document analysis.
Geographical Context: It accurately associates "Finnish," "group," and "Finland," useful for regional financial analysis.
Temporal Understanding: The proximity of "year" and "quarter" to financial terms demonstrates the model's ability to comprehend temporal elements.

Improving the Baseline Model: Handling Unknown Words and Hyperparameter Tuning
Unknown Word Handling:
My approach included two strategies:
Learning from Context: The model dynamically learns embeddings during training, utilizing the contextual information available within financial documents to infer meaning.
Random Initialization: For isolated or rare words, where contextual clues are sparse, I employed random initialization.
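A minimal sketch of how these two strategies can be combined into an embedding matrix; the function name, the reserved padding row, and the scale of the random vectors are assumptions of this illustration.

```python
import numpy as np

def build_embedding_matrix(word_index, w2v, dim=300, seed=0):
    """Known words reuse the trained Word2Vec vectors; isolated or rare
    words fall back to random initialization."""
    rng = np.random.default_rng(seed)
    matrix = np.zeros((len(word_index) + 1, dim))     # row 0 reserved for padding
    for word, idx in word_index.items():
        if word in w2v.wv:
            matrix[idx] = w2v.wv[word]                # learned from context
        else:
            matrix[idx] = rng.normal(0.0, 0.01, dim)  # random initialization
    return matrix
```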

Optimal Configuration Achieved:
The combination of parameters — vector size of 300, window size of 25, negative sampling rate of 20, 256 LSTM units, and a dropout rate of 0.4 — demonstrated the best performance across all tested configurations. This optimal setup achieved a peak accuracy of 0.7313, outperforming all other parameter combinations in our hyperparameter tuning process.
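Written out as code, the tuned setup might look like the sketch below. The toy corpus and the three-class output layer are assumptions; the numeric hyperparameters are the ones reported above.

```python
from gensim.models import Word2Vec
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

# Toy corpus stands in for the tokenized financial sentences
corpus = [["profit", "rose", "this", "quarter"], ["sales", "declined", "sharply"]]

# CBOW Word2Vec: vector size 300, window 25, negative sampling rate 20
w2v = Word2Vec(sentences=corpus, sg=0,
               vector_size=300, window=25, negative=20, min_count=1)

model = Sequential([
    Embedding(input_dim=len(w2v.wv) + 1, output_dim=300),
    LSTM(256),                        # 256 LSTM units
    Dropout(0.4),                     # dropout rate of 0.4
    Dense(3, activation="softmax"),   # positive / negative / neutral
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```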

Key Takeaways
1. Custom-Built Embeddings: Without relying on pretrained embeddings, I developed word representations from scratch. This approach allowed the model to adapt specifically to the unique language and context of financial text.
2. Effective Handling of Unknown Words: By dynamically learning representations for unknown words and using random initialization when necessary, the model became resilient to new or rare financial terms—essential for real-world financial applications.
3. Optimal Hyperparameter Configuration: Extensive tuning led to the ideal combination of vector size, window size, negative sampling, LSTM units, and dropout rate, achieving a peak accuracy of 0.7313 and maximizing model performance.
4. Enhanced Financial Text Analysis: This custom-built model effectively captures sentiment in financial texts, making it a valuable tool for insights in finance, from market sentiment analysis to informed decision-making.
5. Future Improvements: With everything built from scratch, the model provides a strong foundation for future enhancements. Potential next steps include experimenting with transformer models or incorporating additional financial datasets.

Project 3
Smart Savings on the Go: Personalized In-Vehicle Coupon Recommendations
INTRODUCTION

This project explores the potential of a data-driven, personalized coupon recommendation system that utilizes in-vehicle data to deliver relevant offers to users during their journeys. By analyzing user preferences, past purchases, and real-time location data, this recommendation system aspires to deliver timely discounts and promotions that align with users’ current contexts and needs. Our goal is to increase user engagement, boost customer satisfaction, and drive sales, all while ensuring that recommendations feel organic rather than intrusive.

My journey through Exploratory Data Analysis:
1. Trips with "No Urgent Place" as the destination showed a higher acceptance rate for coupons. This insight suggests that passengers may be more receptive to offers when they're in a relaxed setting, perhaps open to spontaneous stops.
2. When traveling alone, individuals accepted coupons more frequently than those accompanied by friends or family.
3. Sunny days correlated with higher coupon acceptance rates, particularly for coffee and takeout options.
4. I noticed that coupon acceptance varied depending on the time of day, hinting that time-specific recommendations like a morning coffee coupon could enhance the likelihood of redemption.
Feature Engineering Summary
To build an effective in-vehicle recommendation system, I transformed the raw data into features that would directly support coupon prediction. I began by simplifying the distance metrics into a single feature to avoid redundancy and streamline the model’s focus. Next, I captured user preferences by consolidating visit frequencies based on the coupon type in each instance, creating a frequencytocoupon_loc feature. Finally, by encoding categorical variables into ordinal values, I ensured that the data was ready for model building without losing the unique behavioral signals essential to personalized recommendations.
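A small sketch of the encoding idea; the column names, category labels, and ordering below are illustrative assumptions rather than the dataset's exact schema.

```python
import pandas as pd

# Toy rows standing in for the in-vehicle coupon data
df = pd.DataFrame({
    "coupon_type": ["Coffee House", "Takeaway", "Coffee House"],
    "coffee_house_visits": ["never", "1~3", "gt8"],
})

# Ordinal encoding that preserves the behavioral ordering of visit frequency
visit_order = {"never": 0, "less1": 1, "1~3": 2, "4~8": 3, "gt8": 4}
df["coffee_house_visits"] = df["coffee_house_visits"].map(visit_order)
print(df)
```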

Establishing the Foundation: Naive Bayes as the Baseline Model
The Naive Bayes model is like a detective piecing together clues: it examines each piece of information—destination, weather, travel companions—treating each as an independent hint about whether a user will accept a coupon. By looking at each feature separately, Naive Bayes quickly forms a probabilistic picture of the user’s likely response.
The Process of Piecing Together Probabilities
Starting with the training data, I divided it carefully, reserving 70% for training and setting aside 30% for testing. This split ensured that I could train the model on a broad range of scenarios while preserving some surprises for the final evaluation. For each feature, I calculated prior probabilities to establish a baseline likelihood for each outcome: would a user accept or decline the coupon? Then, I calculated likelihoods—the probability of each feature value given the outcome. For instance, how often did sunny weather coincide with coupon acceptance? What about solo travelers?
With these probabilities in hand, the model was ready to make predictions. For each new test instance, Naive Bayes combined all the clues (feature values) and calculated a posterior probability for each possible outcome, selecting the one with the highest likelihood as the final prediction.
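That probability bookkeeping can be sketched from scratch as below; the Laplace smoothing and the function names are assumptions of this illustration, and all features are treated as categorical.

```python
import numpy as np
import pandas as pd

def fit_naive_bayes(X: pd.DataFrame, y: pd.Series, alpha=1.0):
    """Estimate class priors and per-feature likelihoods with smoothing."""
    priors = y.value_counts(normalize=True).to_dict()
    likelihoods = {}
    for col in X.columns:
        n_vals = X[col].nunique()
        for cls in priors:
            counts = X.loc[y == cls, col].value_counts()
            total = counts.sum()
            likelihoods[(col, cls)] = {
                v: (counts.get(v, 0) + alpha) / (total + alpha * n_vals)
                for v in X[col].unique()
            }
    return priors, likelihoods

def predict_one(row: pd.Series, priors, likelihoods):
    """Combine log-prior and log-likelihoods; pick the most probable class."""
    scores = {}
    for cls, prior in priors.items():
        score = np.log(prior)
        for col, val in row.items():
            score += np.log(likelihoods[(col, cls)].get(val, 1e-9))
        scores[cls] = score
    return max(scores, key=scores.get)
```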
Key Takeaways
1. The Naive Bayes model achieved an accuracy of around 70%, a strong start for a baseline. More importantly, its recall was approximately 78.5%.
2. The ROC curve further validated this performance, with an AUC of 0.69, confirming that Naive Bayes could outperform random guessing and provide meaningful predictions.

Exploring Alternative Approaches: Logistic Regression and SVM

As part of my journey to build an effective recommendation system, I experimented with Logistic Regression and Support Vector Machine (SVM) models.
Logistic Regression served as a linear model that could reveal the influence of each feature on coupon acceptance. To account for class imbalance, I used SMOTE to boost representation of the minority class in the training set, ensuring the model had a balanced view of both outcomes. I also tuned key hyperparameters to optimize its learning rate and convergence. Logistic Regression, while interpretable, showed limited effectiveness with an accuracy of about 61% and struggled with complex patterns in the data.
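The rebalancing step might look like this sketch, with toy data standing in for the encoded features:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Toy imbalanced training set (80 negatives, 20 positives)
X_train = np.random.rand(100, 5)
y_train = np.array([0] * 80 + [1] * 20)

X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
print(np.bincount(y_res))  # both classes now equally represented
```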
Soft Margin SVM was an attempt to explore a non-linear decision boundary for this classification task. Given the computational expense, I applied stratified sampling to work with a subset of the data and used the ANOVA F-value for feature selection, choosing the most relevant features. While SVM's theoretical strength lies in separating classes with an optimal margin, it didn't perform well on this categorical dataset, achieving an accuracy of around 42%. The categorical nature of the data, along with SVM's sensitivity to scaling, made it less suitable for this problem, but it highlighted the need for models that handle non-linearities more effectively.
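The feature-selection step can be sketched as follows; the toy data and the choice of k=10 retained features are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Toy stand-in for the encoded training features and labels
X_train = np.random.rand(200, 25)
y_train = np.random.randint(0, 2, 200)

# Rank features by ANOVA F-value and keep the most relevant ones
selector = SelectKBest(score_func=f_classif, k=10)
X_selected = selector.fit_transform(X_train, y_train)
print(X_selected.shape)  # (200, 10)
```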
Neural Network: Capturing Complex Patterns in User Behavior
After establishing a baseline with Naive Bayes, I introduced a Neural Network to capture the complex, non-linear relationships in user behavior, improving the recommendation system's predictive accuracy.
The Neural Network consisted of:
1. An input layer representing all features, including contextual data like destination, weather, and time.
2. Two hidden layers with 128 and 64 neurons, using ReLU activation to detect intricate patterns and interactions between features.
3. A sigmoid output layer that produced a probability score for coupon acceptance, ideal for binary classification.
To ensure the model generalized well, I incorporated dropout layers (20% rate) to prevent overfitting by disabling random neurons during training and used batch normalization to stabilize learning and improve convergence.
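A sketch of that architecture in Keras; the feature count is a placeholder for whatever the encoded feature matrix provides.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization

n_features = 20  # assumed width of the encoded feature matrix

model = Sequential([
    Input(shape=(n_features,)),
    Dense(128, activation="relu"),
    BatchNormalization(),
    Dropout(0.2),
    Dense(64, activation="relu"),
    BatchNormalization(),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),  # probability of coupon acceptance
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```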
Key Takeaways
1. Accuracy of approximately 70%, comparable to Naive Bayes but with enhanced adaptability.
2. Precision of around 68-70%, indicating relevant and focused recommendations.
3. Recall of 81%, making it highly effective at capturing cases of coupon acceptance—a critical feature for recommendation systems.

Project 4
Recommender System: Text Analysis with SVM and Content-Based Recommendation
Introduction

Imagine a platform that not only classifies text by topic but also recommends related content based on the specific themes a user has already explored. This project, "Recommender System: Text Analysis with SVM and Content-Based Recommendation," does just that. By combining the power of Support Vector Machines (SVM) for classification and a content-based recommendation system, we aim to create a system that both categorizes and recommends content effectively.

Building a Smart Text Classifier: Preprocessing, TF-IDF, and SVM
Filtering relevant content from massive datasets can be a game-changer. To tackle this challenge, I set out to build a classifier capable of understanding and categorizing text from the 20 Newsgroups dataset, a classic collection of documents covering a diverse set of topics like technology, sports, religion, and more. Our goal? Achieve an accurate model that could categorize any given text into one of 20 categories.
Preprocessing - Making the Text Ready for the Model
Raw text data is often noisy and unstructured, so we set out to clean and standardize it. Here's how we prepared the documents:
1. Lowercasing: To ensure consistency, I converted all text to lowercase.
2. Removing Punctuation and Special Characters: Next, I removed punctuation and symbols, focusing purely on the words themselves.
3. Tokenization and Stopword Removal: I split each document into individual words (tokens) and removed common words like "the," "is," and "in" using NLTK's stopword list. This step reduced the dataset's dimensionality, leaving behind only meaningful words that might actually help distinguish between categories.
4. Experimenting with Lemmatization
Initially, I thought lemmatization would help improve the model’s accuracy. By reducing "running" to "run" and "better" to "good," lemmatization can often make text more consistent and improve model performance. However, in this case, adding lemmatization didn’t yield significant improvements. This was likely because the dataset already contained specific terminology, and reducing words to their roots wasn’t as impactful as expected.
5. Feature Extraction with TF-IDF
With the text cleaned and ready, I moved on to feature extraction. To represent text numerically, I used TF-IDF (Term Frequency-Inverse Document Frequency), which captures the importance of words in each document while down-weighting common words across all documents. I limited the vocabulary to the top 5,000 features to keep the model efficient while still capturing the essential information.
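In outline, this step looks like the sketch below, with toy documents standing in for the cleaned corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["gpu drivers crash on boot", "the team won the final match"]

vectorizer = TfidfVectorizer(max_features=5000)  # cap vocabulary at top 5,000 terms
X = vectorizer.fit_transform(docs)               # sparse document-term matrix
print(X.shape)                                   # (n_documents, n_features)
```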

Training the Model with SVM
To classify the documents, I experimented with different SVM kernels—linear, polynomial, RBF (Radial Basis Function), and sigmoid—to determine which would yield the best performance. After testing each, the polynomial kernel emerged as the top performer, achieving an impressive 88% accuracy on the test set. This result demonstrated that the polynomial kernel was particularly effective in capturing the nuances and patterns within the dataset, outperforming other kernels in distinguishing between the 20 categories.
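The kernel comparison can be reproduced in outline as below. This sketch uses scikit-learn's copy of 20 Newsgroups and a simple TF-IDF pipeline, so the preprocessing (and therefore the exact accuracies) will differ from the project's.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=5000).fit_transform(data.data)
X_train, X_test, y_train, y_test = train_test_split(
    X, data.target, test_size=0.3, random_state=42)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{kernel}: {acc:.3f}")
```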

Adding a Personalized Touch: Content-Based Recommendations
After achieving a solid classification accuracy of 88% with the SVM model, I wanted to take this project a step further. Beyond just categorizing documents, I wanted to help users discover related content based on their interests. This led to building a content-based recommendation system that would suggest documents similar to a user's input.
Implementing the Recommendation System
To make recommendations, I designed a function that could take any text input and return the most relevant documents from the dataset. Here's how it worked:
Transforming the Input Text: First, I used the same TF-IDF vectorizer that had been trained on the document dataset to convert the input text into a numerical vector.
Calculating Similarity: With the input vector ready, I calculated the cosine similarity between the input and each document in the dataset. Cosine similarity helped identify documents with similar content by measuring the angle between their vectors—a smaller angle meant more similarity.
Retrieving the Most Relevant Documents: Once I had similarity scores, I sorted them in descending order and selected the top documents as recommendations. This approach allowed the system to recommend documents that were the closest match to the user’s input, whether they were looking for "Mac hardware issues" or "sports updates."
Presenting the Recommendations:
For each recommended document, I displayed the category and a snippet of the content to give users a quick preview. The goal was to provide a seamless experience where users could explore related topics without needing to specify exact keywords.
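Putting those steps together, the function might look like this sketch; the parameter names are illustrative, and it assumes the fitted TF-IDF vectorizer and document matrix from the classification stage.

```python
from sklearn.metrics.pairwise import cosine_similarity

def recommend(query, vectorizer, doc_matrix, documents, categories, top_n=5):
    """Return the top_n documents most similar to the query text."""
    query_vec = vectorizer.transform([query])   # same TF-IDF space as training
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    top_idx = scores.argsort()[::-1][:top_n]    # highest similarity first
    for i in top_idx:
        snippet = documents[i][:80].replace("\n", " ")
        print(f"[{categories[i]}] (score {scores[i]:.2f}) {snippet}...")
```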
Results from Recommendation System

PROJECT 5
Understanding Customer Intent: A Journey Through Banking Conversations
INTRODUCTION

In the fast-paced world of banking, each customer query—whether it’s checking a balance, applying for a loan, or reporting a lost card—reveals a distinct intent. Yet, as the volume of customer interactions grows, deciphering these intents swiftly becomes challenging. To address this, I set out to create an intent classification model that could act as a bridge between customer needs and efficient service. By transforming vast amounts of unstructured text data into actionable insights, my model identifies patterns in language, categorizes intents, and ultimately enables banks to deliver more personalized and timely support. This journey through natural language processing in banking brings customer service one step closer to understanding and anticipating customer needs.

Exploring Query Structure and Patterns
Next, I zoomed in on the structure of these queries by analyzing their lengths. How long are typical questions? The histogram of query lengths painted a clear picture: most queries are strikingly concise, with the majority falling within the 5-10 word range. This suggests that customers approach their banking needs with clarity and directness, expecting quick, straightforward answers. The occasional longer queries, stretching up to 80 words, likely represent more complex issues requiring detailed responses. This insight into query structure highlights the importance of optimizing language models to handle short, focused inputs while accommodating the rare, more elaborate inquiries—an essential step for building an effective intent classifier.
Curious about patterns within the text, I then explored bigrams—two-word combinations that frequently appear together. By extracting and counting the most common bigrams, I uncovered phrases that offer deeper insight into customer behavior. Pairs like "credit card," "account balance," and "transfer money" highlight the specific actions and inquiries that drive customer interactions. This focus on bigrams not only adds context to individual words but also hints at the intentions behind customer questions.
Preprocessing for Effective Text Classification
To prepare the banking queries for classification, I embarked on a thoughtful text preprocessing journey. Knowing that each detail matters, I began with tokenization, breaking down the text into individual words, or tokens. This step transformed raw text into a format that could be fed into the model, setting a strong foundation for further processing.
Punctuation and stop words were retained intentionally—an unconventional choice in many NLP tasks but a strategic decision here. In banking queries, small words like “is” or punctuation can provide essential context. By preserving these elements, I aimed to capture subtle distinctions in customer language, allowing the model to better understand specific banking intents.

Lemmatization, or reducing words to their base forms, was skipped in this project. I realized that in banking, the specific forms of words carry unique meanings; "transferring" and "transfer," for example, might signal slightly different intents. Retaining the original word forms allowed the model to recognize these nuances, enhancing its ability to accurately classify intents.
Handling out-of-vocabulary (OOV) words was another critical consideration. To ensure the model wasn't thrown off by unfamiliar terms, I introduced a placeholder token, <UNK>, to represent unknown words. This helped the model focus on known vocabulary while still capturing the essence of unfamiliar phrases through surrounding context. I also expanded contractions, such as turning "can't" into "cannot," which reduced the likelihood of missing out on critical intent signals hidden within shortened phrases.
Finally, I created word embeddings using Word2Vec, employing the Continuous Bag of Words (CBOW) model to map each word into a dense vector. These embeddings enriched the model's understanding of word relationships, especially in banking-specific terminology where context is key. To avoid potential biases from sequentially ordered data, I shuffled the batch data during training, ensuring the model learned underlying patterns rather than memorizing any order.
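A condensed sketch of these preprocessing choices; the contraction map, the tiny corpus, and the vector size are illustrative assumptions.

```python
import re
from gensim.models import Word2Vec

CONTRACTIONS = {"can't": "cannot", "won't": "will not", "i'm": "i am"}  # sample map

def preprocess(query, vocab):
    """Expand contractions, keep punctuation, map unseen words to <UNK>."""
    for short, full in CONTRACTIONS.items():
        query = query.replace(short, full)
    tokens = re.findall(r"\w+|[^\w\s]", query.lower())  # words and punctuation
    return [t if t in vocab else "<UNK>" for t in tokens]

# CBOW Word2Vec over tokenized queries (a toy corpus stands in here)
corpus = [["how", "do", "i", "transfer", "money", "?"],
          ["my", "card", "is", "lost", "!"]]
w2v = Word2Vec(sentences=corpus, sg=0, vector_size=100, window=5, min_count=1)
vocab = set(w2v.wv.index_to_key)

print(preprocess("i can't transfer money?", vocab))
# -> ['i', '<UNK>', 'transfer', 'money', '?']
```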

Building an Intent Classifier with LSTM: A Model Designed for Banking Conversations
To tackle the task of understanding customer intents within banking queries, I needed a model architecture that could handle sequential data effectively. Given the context-heavy nature of language, especially in banking, I chose a Long Short-Term Memory (LSTM) network—a type of recurrent neural network well-suited for retaining contextual information over sequences.
I started by stacking multiple LSTM layers, allowing the model to learn both simple and complex language patterns. This layered structure enabled the model to grasp everything from basic word associations to more nuanced meanings, crucial for accurately predicting customer intent. To prevent the model from overfitting, I incorporated dropout layers between the LSTM layers, randomly disabling some neurons during training. This approach helped ensure that the model could generalize well, rather than memorizing specific patterns in the training data.
In addition, I used padding to make all input sequences the same length. This step was essential because customer queries vary in length; padding allowed the model to process them consistently, keeping its focus on content rather than structural inconsistencies.
With the LSTM framework established, I proceeded with hyperparameter tuning. Using a random search strategy, I tested various configurations, fine-tuning parameters like the number of LSTM units, learning rate, batch size, and dropout rate. Adjusting these settings was like tuning an instrument, with each parameter contributing to a balance between learning speed, accuracy, and model robustness. I also optimized the embedding dimensions and window size in the Word2Vec model used for word embeddings. These embeddings captured word relationships, making the LSTM more effective at distinguishing intents based on contextual clues.
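In outline, the stacked architecture looks like the sketch below; the vocabulary size, embedding dimension, and layer widths are assumptions, while the 77 output classes match the BANKING77 intents.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

vocab_size, embed_dim, n_intents = 10_000, 100, 77  # assumed sizes

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=embed_dim, mask_zero=True),
    LSTM(128, return_sequences=True),  # first layer passes the full sequence on
    Dropout(0.2),
    LSTM(64),                          # second layer condenses it to one vector
    Dropout(0.2),
    Dense(n_intents, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```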

Experimenting with Bidirectional LSTM (BiLSTM)
Finally, I experimented with a Bidirectional LSTM (BiLSTM) layer. Unlike traditional LSTMs, the BiLSTM processes information from both directions, allowing the model to understand words in the context of what comes before and after them. This bidirectional approach enhanced the model's ability to interpret full intent more effectively, capturing subtle patterns across different intents within the banking domain. After rigorous training, this BiLSTM-based architecture, coupled with carefully tuned parameters, achieved the highest accuracy among the configurations tested, demonstrating its suitability for handling complex customer queries in banking.
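In the earlier sketch, making the model bidirectional is a small change (the unit count is again an assumption): wrap the recurrent layer so it reads each query forwards and backwards.

```python
from tensorflow.keras.layers import Bidirectional, LSTM

# Drop-in replacement for the first LSTM layer in the earlier sketch
bilstm = Bidirectional(LSTM(128, return_sequences=True))
```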

Achieving Accuracy with Fine-Tuned Models: Results of Intent Classification
Through extensive experimentation and model tuning, the final BiLSTM model achieved impressive accuracy on the BANKING77 dataset. I tested various configurations and preprocessing strategies to find the optimal setup for accurately identifying customer intents in banking.
After refining text handling and ensuring a balanced architecture, the final configuration, with punctuation and stop words retained and a carefully adjusted BiLSTM, achieved an accuracy of 85%. This result reflected the model's ability to capture context effectively, distinguishing between nuanced banking queries with a high degree of precision.
During training, both training and validation accuracy steadily increased over epochs, and loss values decreased consistently, showing that the model not only learned from the data but also generalized well to new examples. This final accuracy and low error rate confirmed the effectiveness of using a BiLSTM architecture with tuned hyperparameters, creating a robust intent classification system that can handle the complex demands of customer interactions in banking.

Prediction of the Intent

Key Takeaways
Data-Driven Design with EDA: Exploratory data analysis revealed key patterns, such as the brevity of queries and frequently occurring bigrams. These insights informed the model's design, helping align it with typical customer behaviors in banking.
Targeted Preprocessing: Customized text preprocessing steps, including the retention of punctuation and stop words, preserved essential context in banking queries. This approach allowed the model to interpret subtle nuances in customer language, which is crucial for accurate intent classification.
Word Embeddings for Enhanced Understanding: Using Word2Vec embeddings provided dense vector representations of words, enriching the model's understanding of banking-specific terms and relationships—valuable for recognizing customer intents.
Effective BiLSTM Model: The bidirectional LSTM (BiLSTM) architecture enabled the model to understand each query in its full context, improving classification accuracy. Processing queries both forwards and backwards helped capture nuanced relationships in customer intents.
Strong and Reliable Performance: The final model achieved 85% accuracy with minimal overfitting, as shown by closely aligned training and validation metrics. This performance validates the model's robustness and its suitability for real-world banking applications.

Hello! I’m Vyshali

Work Experience
Graduate Teaching Assistant
September 2024 - December 2024
GIS Data Analyst Co-op
July 2023 - December 2023
Marketing Data Analyst
December 2019 - January 2022

Education
Master of Science
Data Analytics Engineering
September 2022 - December 2024
Bachelor of Science
Mathematics, Statistics and Computers
June 2016 - June 2019

CONTACT


[email protected]
+1(351)2200415