The ECG5000 Classifier is a deep learning model that classifies electrocardiogram (ECG) signals, using a custom dataset derived from the ECG5000 dataset. The project combines convolutional layers for feature extraction, a Transformer encoder for sequence processing, and a multi-layer perceptron (MLP) for final classification. The model is implemented in PyTorch and optimized to run on a GPU.
A custom dataset class is implemented to prepare the data from text files. The preprocessing steps include:
- Data Reading: The ECG data is read from multiple text files, each containing time-series data.
- Normalization: Each ECG signal is normalized to ensure consistent scaling across samples.
- Transformation: Data is transformed into a format suitable for input into convolutional and Transformer layers, ensuring that the shape and dimensions align with the model requirements.
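The preprocessing steps above can be sketched as a PyTorch `Dataset`. This is a minimal illustration, not the project's actual class: the names `ECGDataset`, `records`, and `labels` are assumptions, and the per-sample z-score normalization is one plausible reading of "normalized to ensure consistent scaling".

```python
import torch
from torch.utils.data import Dataset

class ECGDataset(Dataset):
    """Illustrative sketch: read, normalize, and reshape ECG time series."""

    def __init__(self, records, labels):
        # `records` is assumed to be a list of 1-D signals already parsed
        # from the text files; `labels` the matching class indices.
        self.labels = torch.as_tensor(labels, dtype=torch.long)
        signals = []
        for r in records:
            x = torch.as_tensor(r, dtype=torch.float32)
            # Per-sample normalization for consistent scaling across samples.
            x = (x - x.mean()) / (x.std() + 1e-8)
            # Add a channel dimension: Conv1d expects (channels, length).
            signals.append(x.unsqueeze(0))
        self.signals = torch.stack(signals)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.signals[idx], self.labels[idx]
```

Each item then comes out shaped `(1, length)`, which is the layout the convolutional front end expects.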
The architecture of the ECG5000 Classifier is designed to effectively extract features from ECG signals and classify them into predefined categories:
- Convolutional Layers: Three Conv1d layers with 128, 256, and 512 filters, respectively, capture local patterns in the time-series signals.
- Transformer Encoder: The convolutional features are passed through a Transformer encoder consisting of 8 encoder layers, each with 8 self-attention heads, enabling the model to capture complex dependencies within the ECG signals.
- Fully Connected Layer: The output from the Transformer encoder is fed into a fully connected layer (MLP) that classifies the processed signals into their respective categories.
- Optimizer: The model is trained with Stochastic Gradient Descent (SGD), using momentum and weight decay to improve convergence and generalization. Training is structured into two phases, as outlined below:
- Momentum: 0.9
- Weight Decay: 1e-5
- First Phase:
  - Epochs: 5
  - Learning Rate: 0.01
- Second Phase:
  - Epochs: 7
  - Learning Rate: 0.001
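The architecture and optimizer described above can be sketched as follows. The layer widths (128/256/512 filters, 8 encoder layers, 8 heads) and SGD settings come from the text; the kernel sizes, pooling choice, and `num_classes=5` (the ECG5000 label count) are illustrative assumptions, not the project's exact configuration.

```python
import torch
import torch.nn as nn

class ECG5000Classifier(nn.Module):
    """Sketch of the described Conv1d + Transformer encoder + MLP model."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Three Conv1d feature extractors with 128, 256, and 512 filters.
        # Kernel sizes here are guesses; the text does not specify them.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 128, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 512, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Transformer encoder: 8 layers, each with 8 self-attention heads.
        layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=8)
        # Fully connected head for the final classification.
        self.head = nn.Linear(512, num_classes)

    def forward(self, x):                # x: (batch, 1, length)
        x = self.conv(x)                 # (batch, 512, length)
        x = x.transpose(1, 2)            # (batch, length, 512) for the encoder
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # pool over time, then classify

model = ECG5000Classifier()
# SGD with the momentum and weight decay from the text; first phase lr=0.01.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-5)
# After 5 epochs, drop to lr=0.001 for the 7-epoch second phase:
# for g in optimizer.param_groups:
#     g["lr"] = 0.001
```

Averaging the encoder output over the time dimension is one simple way to reduce the sequence to a single feature vector for the MLP head; the original project may pool differently.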
Training was performed on the following hardware:
- NVIDIA GeForce GTX 960M
- 4 GB Memory
- 640 CUDA Cores
After training, the model achieved:
- Test Cross-Entropy Loss: 0.18
- Test Accuracy: 93.98%
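The two reported metrics can be computed with an evaluation loop like the one below. This is a generic sketch: `evaluate`, `model`, and `test_loader` are hypothetical names, not identifiers from the project.

```python
import torch
import torch.nn.functional as F

def evaluate(model, test_loader, device="cpu"):
    """Return mean cross-entropy loss and accuracy over a test loader."""
    model.eval()
    total_loss, correct, count = 0.0, 0, 0
    with torch.no_grad():
        for signals, labels in test_loader:
            signals, labels = signals.to(device), labels.to(device)
            logits = model(signals)
            # Sum (not mean) per batch so the final division gives the
            # exact mean loss over all samples regardless of batch size.
            total_loss += F.cross_entropy(logits, labels, reduction="sum").item()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            count += labels.size(0)
    return total_loss / count, correct / count
```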
Thank you for your interest in this project! If you have any questions or suggestions, feel free to reach out.
Email: yassingourkani@outlook.com
LinkedIn: Yasin LinkedIn