Voice-Assistant is an ML-based voice assistant.

This is a desktop voice assistant powered by an ML model built using Python. The UI is built with Electron.js, while inference is handled through Node.js. It understands your speech, processes it with a custom-trained intent recognition model, and executes platform-specific commands.
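At its core, the assistant maps a transcribed utterance to an intent label. As a minimal, hypothetical sketch (the model path and label names are assumptions, not taken from this repository), prediction with a saved scikit-learn pipeline could look like this:

```python
import joblib

# Assumed artifact path; the repository's actual model files may be named differently.
model = joblib.load("backend/model/model1/intent_model.joblib")
intent = model.predict(["open the browser"])[0]
print(intent)  # e.g. "open_browser"
```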
- 🎙️ Voice Input
- 🤖 ML-based Intent Detection
- ⚙️ Cross-platform Command Execution (see the sketch after this list)
- 🗣️ Voice Output
- 🖥️ Desktop-first App
- 🔐 Privacy-Focused
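To illustrate the cross-platform command execution idea, the sketch below picks an OS-specific command for an intent using Python's `platform` and `subprocess` modules. The intent names and commands are illustrative assumptions, not the repository's actual command table.

```python
import platform
import subprocess

# Hypothetical intent-to-command table, keyed by the operating system name.
COMMANDS = {
    "open_browser": {
        "Windows": ["cmd", "/c", "start", "", "https://example.com"],
        "Darwin": ["open", "https://example.com"],
        "Linux": ["xdg-open", "https://example.com"],
    },
}

def execute(intent: str) -> None:
    """Run the command registered for this intent on the current platform."""
    command = COMMANDS.get(intent, {}).get(platform.system())
    if command is None:
        print(f"No command registered for '{intent}' on {platform.system()}")
        return
    subprocess.run(command)

execute("open_browser")
```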
| Layer | Stack |
|---|---|
| Frontend | HTML, CSS, JS and React.js |
| Backend | Node.js and Python |
| Model | scikit-learn ML models |
| Voice I/O | Web Speech API |
- 🎙️ The user speaks a command
- 🎧 The voice is transcribed and sent to the Python backend
- 🧠 The ML model classifies the intent
- 💻 The platform-specific command is executed
- 🗣️ The response is spoken back to the user along with the execution of the task (a sketch of this flow follows below)
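A minimal, hypothetical sketch of the Python side of this flow is shown below: it classifies a transcribed command and returns the text the assistant should speak, emitting one JSON reply per line so a Node.js parent process could consume it. The model path, intent labels, and line-based protocol are assumptions for illustration, not necessarily how this repository wires Node.js and Python together.

```python
import sys
import json
import joblib

# Assumed artifact path and response table; illustrative only.
MODEL = joblib.load("backend/model/model1/intent_model.joblib")
RESPONSES = {
    "open_browser": "Opening the browser.",
    "tell_time": "Here is the current time.",
}

def handle(transcript: str) -> str:
    """Classify the transcript and return the text to speak back."""
    intent = MODEL.predict([transcript])[0]
    # The matching platform-specific command would be executed here.
    return RESPONSES.get(intent, "Sorry, I did not understand that.")

# One JSON object per line keeps the replies easy to parse from Node.js.
for line in sys.stdin:
    print(json.dumps({"response": handle(line.strip())}), flush=True)
```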
- Node.js & npm
- Python & pip
- Electron.js & React
```bash
# Clone the repository
git clone "https://github.com/hrutavmodha/voice-assistant.git"
cd ./voice-assistant

# Train each intent-recognition model
cd ./backend/model/model1
python ./train.py
cd ../model2 && python ./train.py
cd ../model3 && python ./train.py
cd ../model4 && python ./train.py
cd ../..
```
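Each `train.py` is expected to build and save one of the intent models. As a rough, hypothetical sketch (the actual training data, features, and estimator in this repository may differ), a scikit-learn intent classifier could be trained and persisted like this:

```python
# Hypothetical train.py sketch; phrases, labels, and the output filename are assumptions.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = [
    "open the browser", "launch chrome",
    "what time is it", "tell me the time",
    "shut down the computer", "turn off the pc",
]
intents = [
    "open_browser", "open_browser",
    "tell_time", "tell_time",
    "shutdown", "shutdown",
]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(phrases, intents)
joblib.dump(model, "intent_model.joblib")
```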
```bash
# Install backend dependencies
npm install
pip install scikit-learn joblib
# Install frontend dependencies
cd ../frontend
npm install
# Install root dependencies
cd ..
npm install
# Start the backend
cd backend
npm run start
# Start the frontend (from the project root, typically in a separate terminal)
python -m frontend.chat.app
```
Made with passion by Hrutav Modha
After running `app.py`, wait a moment; the browser will open automatically at http://localhost:3000.
This project is purely offline and does not rely on any external APIs, nor is it a dumb `if-elif-else`-like rule-based voice assistant. It's a fully local voice assistant with a custom-trained AI model, designed with privacy in mind.