This project introduces an IoT-enabled assistive glove that translates American Sign Language (ASL) gestures into text and speech in real time. It aims to bridge the communication gap between the deaf and hearing communities by combining IoT hardware, machine learning, and mobile app development.
- Real-time gesture recognition (static and dynamic).
- Flutter-based mobile app with text-to-speech and learning modules.
- High accuracy (97%) using Random Forest for gesture classification.
- Sensors: Flex sensors and an MPU-6050 IMU (3-axis accelerometer + 3-axis gyroscope).
- Processing: Arduino Nano for sensor data collection and Bluetooth transmission.
- Mobile App: Flutter-based app paired with a FastAPI backend for real-time gesture recognition (see the backend sketch below).
Access the mobile application here: Flutter Mobile App.
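
The backend's exact API is not documented here; the snippet below is a minimal sketch of what a FastAPI prediction endpoint could look like, assuming the app sends one flattened window of sensor readings per request and the trained Random Forest is saved as `model.pkl` (the payload fields, route name, and file name are all assumptions, not the project's actual interface).

```python
# Hypothetical FastAPI backend: receives one window of glove sensor readings
# and returns the predicted ASL gesture. Payload layout and model path are assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # assumed path to the trained Random Forest


class SensorWindow(BaseModel):
    flex: list[float]   # flex-sensor readings (one value per finger)
    accel: list[float]  # accelerometer readings (x, y, z)
    gyro: list[float]   # gyroscope readings (x, y, z)


@app.post("/predict")
def predict(window: SensorWindow):
    # Flatten the readings into the feature order the model was trained on.
    features = np.array(window.flex + window.accel + window.gyro).reshape(1, -1)
    gesture = str(model.predict(features)[0])
    return {"gesture": gesture}
```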
- Input Sensors:
  - Flex Sensors: Track finger bending.
  - Accelerometer: Measures hand acceleration along three axes.
  - Gyroscope: Measures angular velocity along three axes.
- Target Variable: ASL gestures (e.g., "hello," "sorry").
- Samples: ~14,000 gesture points.
- Gestures: Includes static gestures (A-F) and dynamic gestures ("hello," "sorry").
- Data Distribution:
  - "Yes" and "No" involve repetitive motions and are therefore prone to noise.
  - Reading ranges differ across sensors, highlighting differences in sensitivity and occasional outliers.
  - Flex sensor readings show variability due to external stimuli.
  - Accelerometer data is centered around zero under static conditions.
  - Gyroscope data shows minimal variation, reflecting stable data collection.
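
The raw dataset's file name and column layout are not listed here; as an illustration, a first look at the data might resemble the following, assuming a CSV with five flex-sensor columns, three accelerometer axes, three gyroscope axes, and a gesture label (all names below are assumptions).

```python
# Hypothetical look at the raw glove dataset; the file name and column names
# are assumptions about the schema, not the project's actual files.
import pandas as pd

# Assumed layout: five flex-sensor readings, three accelerometer axes,
# three gyroscope axes, and the gesture label.
columns = (
    [f"flex_{i}" for i in range(1, 6)]
    + ["accel_x", "accel_y", "accel_z"]
    + ["gyro_x", "gyro_y", "gyro_z"]
    + ["gesture"]
)

df = pd.read_csv("gesture_data.csv", header=None, names=columns)

print(df.shape)                      # expect roughly 14,000 rows
print(df["gesture"].value_counts())  # per-gesture sample counts
print(df.describe())                 # sensor ranges, spread, and outliers
```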
- Split: 80% training, 20% testing.
- Hyperparameter Tuning: GridSearchCV (3-fold cross-validation).
- Accuracy: 97% on the test set.
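
The full training code lives in the notebook; the sketch below only mirrors the setup described above (80/20 split, Random Forest, GridSearchCV with 3-fold cross-validation). The parameter grid and variable names are assumptions, and `df` refers to the data-loading sketch above.

```python
# Minimal sketch of the described training setup (80/20 split, Random Forest,
# GridSearchCV with 3-fold cross-validation). The parameter grid is an assumption.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Features and labels, assuming the dataframe layout from the sketch above.
X = df.drop(columns=["gesture"])
y = df["gesture"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {
    "n_estimators": [100, 200, 300],  # assumed search ranges
    "max_depth": [None, 10, 20],
}

search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=3)
search.fit(X_train, y_train)

y_pred = search.best_estimator_.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))  # reported as 97% in this project
```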
| Metric    | Value |
|-----------|-------|
| Accuracy  | 97%   |
| Precision | 98%   |
| Recall    | 97%   |
| F1-Score  | 98%   |
| Gesture  | Precision | Recall | F1-Score | Support |
|----------|-----------|--------|----------|---------|
| Awkward  | 1.00      | 0.96   | 0.98     | 140     |
| Bathroom | 1.00      | 1.00   | 1.00     | 149     |
| Deaf     | 1.00      | 1.00   | 1.00     | 136     |
| Goodbye  | 0.99      | 1.00   | 1.00     | 129     |
| Hello    | 0.97      | 0.98   | 0.98     | 170     |
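
These per-gesture figures follow the format of scikit-learn's classification report; continuing from the training sketch above, they could be reproduced roughly as follows.

```python
# Per-gesture precision, recall, F1, and support, continuing the training sketch.
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred, digits=2))
```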
The complete report is available in ModelTraining.ipynb.