
Spam-filter application

Built with Pygame, PyTorch, Torchvision, and NLTK.

The Spam-filter application is designed to assist you in detecting spam in your emails.

Description

In today’s world, almost everyone uses email, and it's frustrating to waste time on irrelevant or unwanted messages. That's why most email services have a spam section. I became curious about how these systems determine what qualifies as spam and what doesn’t, leading me to conduct a study where I compared three different types of neural networks for this classification task.

My project is divided into two main components:

  1. A Pygame application, which you can use to predict whether an email is spam or ham.
  2. A Jupyter Notebook, where I compare the performance of different neural network architectures and describe the entire process of model training.

Feel free to explore the project and see if your emails are classified as spam or ham!

Table of Contents

| Section | Description |
| --- | --- |
| Running the application | Instructions for running the Pygame application |
| Training Datasets | Information about the datasets used to train the models |
| Jupyter Notebook | A detailed breakdown of the notebook: where it is located and which models were used |

Running the application

  1. Install a Python interpreter: Instructions for doing so can be found here.
  2. Clone the repository: Open a terminal and clone the repository:
    git clone https://github.com/AndreRab/Spam-filter.git
  3. Navigate to the project directory: Change into the project folder:
    cd Spam-filter
  4. Install the dependencies: Install the libraries the application needs to run:
    pip install -r requirements.txt
  5. Start the application:
    python scripts/main.py

Training Datasets

For training, I used the Spam Email Classification Dataset. It contains only two columns, text and labels, so no preprocessing was needed before training the models.
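To illustrate the dataset's two-column layout, here is a small pandas frame with invented rows; the column names follow the description above, and the 0/1 label encoding is an assumption for the sketch, not taken from the dataset itself:

```python
import pandas as pd

# Illustrative rows mimicking the dataset's two-column layout.
# The label encoding (1 = spam, 0 = ham) is assumed for this sketch.
df = pd.DataFrame({
    "text": ["Win a free prize now!!!", "Lunch at noon tomorrow?"],
    "labels": [1, 0],
})

texts = df["text"].tolist()
labels = df["labels"].tolist()
```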

Jupyter Notebook

The notebook is located in the research folder. There, you can see that I first created a vocabulary, where each word is assigned a unique ID. If a word is not in the vocabulary, it is mapped to a shared unknown token.
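A minimal sketch of that vocabulary step; the helper names (`build_vocab`, `encode`) and the `<unk>` spelling are illustrative, not taken from the notebook:

```python
UNK = "<unk>"  # shared token for words not seen during vocabulary building

def build_vocab(texts):
    """Assign a unique integer ID to every word, reserving ID 0 for <unk>."""
    vocab = {UNK: 0}
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(text, vocab):
    """Map each word to its ID, falling back to the <unk> ID for new words."""
    return [vocab.get(word, vocab[UNK]) for word in text.lower().split()]

vocab = build_vocab(["free prize now", "meeting at noon"])
ids = encode("free lunch at noon", vocab)  # "lunch" falls back to the <unk> ID
```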

Next, I defined three different architectures using the following blocks: vanilla RNN, GRU, and LSTM. All of them achieved good results, but the best-performing model was the one built from GRU blocks. You can also find a plot of its learning process in the notebook.
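As a sketch of what a GRU-based variant of such a classifier can look like in PyTorch: the class name and the hyperparameters (`embed_dim`, `hidden_dim`) are illustrative values, not the project's actual configuration, and swapping `nn.GRU` for `nn.RNN` or `nn.LSTM` would give the other two architectures being compared:

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """Illustrative GRU-based binary spam/ham classifier (hyperparameters assumed)."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, hidden = self.gru(embedded)         # hidden: (1, batch, hidden_dim)
        return self.fc(hidden.squeeze(0))      # one logit per email: (batch, 1)

model = GRUClassifier(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 10)))  # two emails, ten tokens each
```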
