Implementation Details
Akhil Mahesh edited this page Mar 14, 2025
User Module:
- Data Acquisition: Uses OpenCV to capture and preprocess gesture images.
- Gesture Classification: Implements real-time classification using a CNN built with TensorFlow/Keras.
-
Application UI Module:
- GUI Implementation: Developed in Tkinter to provide a responsive and intuitive interface.
- Feedback Mechanisms: Displays recognized characters, offers word suggestions, and supports text-to-speech conversion via pyttsx3.
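A minimal sketch of the word-suggestion and text-to-speech feedback described above, assuming a simple prefix match over a small vocabulary (the vocabulary and the function names `suggest_words`/`speak` are illustrative, not from the project):

```python
def suggest_words(prefix, vocabulary, limit=3):
    """Return up to `limit` vocabulary words starting with the typed prefix."""
    prefix = prefix.lower()
    return [w for w in vocabulary if w.lower().startswith(prefix)][:limit]

def speak(text):
    """Read the recognized text aloud with pyttsx3 (blocking call)."""
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    # Illustrative vocabulary; the project would load its own word list.
    vocab = ["hello", "help", "house", "sign", "speech"]
    print(suggest_words("he", vocab))  # matches words beginning with "he"
    speak("hello")
```

In the real application these helpers would be wired to Tkinter widgets, e.g. updating a suggestion label after each recognized character and binding `speak` to a button.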
Technology Stack:
- Programming Language: Python 3.9+
- Computer Vision: OpenCV and cvzone for hand detection.
- Deep Learning: TensorFlow/Keras for building and deploying the CNN model.
- Text-to-Speech: pyttsx3 for speech synthesis.
- GUI Framework: Tkinter for the user interface.
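As a sketch of how the cvzone hand-detection step might feed the classifier: `HandDetector` returns a bounding box per hand, and a padded square crop around it keeps the gesture centered at a consistent aspect ratio. The padding value and the helper `square_crop_bounds` are assumptions for illustration:

```python
def square_crop_bounds(bbox, frame_w, frame_h, pad=20):
    """Compute a padded, square crop region around a hand bounding box,
    clamped to the frame so slicing never goes out of range."""
    x, y, w, h = bbox
    side = max(w, h) + 2 * pad                  # square side with margin
    cx, cy = x + w // 2, y + h // 2             # bounding-box center
    x1 = max(cx - side // 2, 0)
    y1 = max(cy - side // 2, 0)
    x2 = min(x1 + side, frame_w)
    y2 = min(y1 + side, frame_h)
    return x1, y1, x2, y2

if __name__ == "__main__":
    # Hypothetical wiring with cvzone; maxHands=1 suits single-gesture input.
    import cv2
    from cvzone.HandTrackingModule import HandDetector
    detector = HandDetector(maxHands=1)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)
        if hands:
            x1, y1, x2, y2 = square_crop_bounds(
                hands[0]["bbox"], frame.shape[1], frame.shape[0])
            roi = frame[y1:y2, x1:x2]  # region handed on to preprocessing
    cap.release()
```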
This Wiki serves as the central documentation hub for the Sign Language to Speech Conversion project.
For updates, discussions, or inquiries:
- Report issues or request features: GitHub Issues
- Join the discussion: GitHub Discussions
- Contribute to the project: Contribution Guidelines
For any additional questions, please contact the project maintainers through the repository.