Usage Guide

After installation, follow these steps to operate the system:

  1. Launch the Application:
    Run Application.py to initialize the webcam and load the CNN model (a minimal startup sketch follows this list).

  2. Gesture Detection:
    The application captures real-time video from the webcam. The hand detection module identifies the region of interest and preprocesses the image for classification (see the ROI-preprocessing sketch below).

  3. Classification and Output:
    • Text Conversion: Recognized gestures are converted into text and displayed on the UI.
    • Speech Output: The text is converted into speech through the TTS engine (see the classification-and-TTS sketch below).

  4. User Interaction:
    • Suggestions and Corrections: The system provides word suggestions based on partial inputs (see the word-suggestion sketch below).
    • Control Functions: Buttons allow users to speak the recognized sentence or clear the current input.
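
The sketch below illustrates the kind of startup logic Application.py performs when it opens the webcam and loads the trained CNN (step 1). The model filename, camera index, and use of the Keras API are assumptions for illustration, not the project's exact code.

```python
# Illustrative startup flow; "cnn_model.h5" and camera index 0 are assumed values.
import cv2
from tensorflow.keras.models import load_model

model = load_model("cnn_model.h5")   # assumed filename of the trained gesture CNN
capture = cv2.VideoCapture(0)        # open the default webcam

if not capture.isOpened():
    raise RuntimeError("Could not open the webcam")
```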
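Next, a rough sketch of the region-of-interest extraction and preprocessing described in step 2. The ROI coordinates, blur kernel, and input size are illustrative values, not the project's actual parameters.

```python
import cv2

def preprocess_frame(frame, roi_box=(100, 100, 300, 300), size=(128, 128)):
    """Crop an assumed region of interest and prepare it for the CNN."""
    x, y, w, h = roi_box
    roi = frame[y:y + h, x:x + w]                    # crop the hand region
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)     # drop colour information
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # reduce noise
    resized = cv2.resize(blurred, size)              # match the model's input size
    return resized.astype("float32") / 255.0         # normalise pixel values
```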
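The following sketch shows one way the classification and speech output of step 3 could fit together: the preprocessed ROI is passed to the CNN and the predicted letter is spoken. The label set and the choice of pyttsx3 as the TTS engine are assumptions; the project's actual engine may differ.

```python
import numpy as np
import pyttsx3   # a common offline TTS library, used here for illustration

LABELS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")   # placeholder label set

def classify_and_speak(model, processed_roi, engine=None):
    """Classify a preprocessed ROI with the CNN, speak the result, and return it."""
    batch = processed_roi[np.newaxis, ..., np.newaxis]   # shape (1, H, W, 1)
    probs = model.predict(batch, verbose=0)
    letter = LABELS[int(np.argmax(probs))]
    engine = engine or pyttsx3.init()
    engine.say(letter)
    engine.runAndWait()
    return letter
```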
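Finally, a minimal sketch of how word suggestions for a partial input (step 4) could be generated. The project's actual suggestion mechanism is not detailed here, so a simple prefix match with a fuzzy fallback over a stand-in vocabulary is shown.

```python
import difflib

WORDS = ["hello", "help", "thanks", "thank you", "please"]   # stand-in vocabulary

def suggest(partial, vocabulary=WORDS, limit=3):
    """Return up to `limit` suggestions: prefix matches first, then close matches."""
    partial = partial.lower()
    prefix_hits = [w for w in vocabulary if w.startswith(partial)]
    fuzzy_hits = difflib.get_close_matches(partial, vocabulary, n=limit)
    ordered = prefix_hits + [w for w in fuzzy_hits if w not in prefix_hits]
    return ordered[:limit]

print(suggest("hel"))   # e.g. ['hello', 'help']
```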
