Shaping Perspectives, Shifting Realities
A virtual waifu / assistant that you can speak to through your mic, and it'll speak back to you! It has many features, such as:
- You can speak to her with a mic
- It can speak back to you
- It can recognize gestures and perform appropriate actions based on them
- Has dynamic voice controls
- Has short-term memory and long-term memory (see the memory sketch after this list)
- Can open apps
- Smarter than you
- Multilingual
- Can learn psychology
- Can recognize different people
- Can sing, dance, and do a lot of other stuff
- Can control your smart home like Alexa
- Dynamic animations
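The memory bullet above maps onto the HyperDB and Sentence Transformers entries in the stack listed further down. Purely as an illustration (this is not the project's code; the model name and the stored "memories" are made up), retrieving long-term memories by embedding similarity can look like this:

```python
# Hypothetical sketch of long-term memory retrieval with sentence embeddings.
# The project lists HyperDB + Sentence Transformers for this; a plain
# cosine-similarity search stands in for the vector store here.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

# Past conversation turns (illustrative data only).
memories = [
    "User said their birthday is in June.",
    "User prefers to be greeted in Japanese.",
    "User asked to dim the living room lights at 10 pm.",
]
memory_vectors = model.encode(memories, normalize_embeddings=True)

def recall(query: str, top_k: int = 2) -> list[str]:
    """Return the stored memories most similar to the current query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = memory_vectors @ q  # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [memories[i] for i in best]

print(recall("turn down the lights"))
```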
More features that I'm planning to add soon are listed in the Roadmap. Also, here's a summary of how it works for those of you who want to know:
Speech captured from the mic is first passed to the OpenAI Whisper model to detect its language, and then handed to the speech recognizer. The result is sent to the client via a websocket, and the transcribed text is forwarded to the Gemini LLM. Jenna's response is printed to the Unity console and appended to conversation_log.json, and finally it is spoken by our fine-tuned Parler TTS together with the character's facial expressions and voice. The character's animation will be generated dynamically by our own custom animation model, which we are currently working on; for the time being we use preset animations.
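Below is a minimal, hypothetical sketch of that server-side flow in Python. The file name conversation_log.json comes from the description above; the Whisper model size, the Gemini model name, the GEMINI_API_KEY variable, and the helper handle_utterance are assumptions, and the websocket hand-off to Unity, the Parler TTS step, and the animation step are only indicated as comments:

```python
# Hypothetical sketch of the STT -> LLM -> log pipeline described above.
# conversation_log.json comes from the description; everything else
# (model choices, env var name, file paths) is illustrative.
import json
import os

import whisper
import google.generativeai as genai

stt_model = whisper.load_model("base")                   # assumed Whisper size
genai.configure(api_key=os.environ["GEMINI_API_KEY"])    # assumed env var name
llm = genai.GenerativeModel("gemini-1.5-flash")          # assumed model name

def handle_utterance(wav_path: str) -> str:
    # 1. Detect the language of the recorded speech with Whisper.
    audio = whisper.pad_or_trim(whisper.load_audio(wav_path))
    mel = whisper.log_mel_spectrogram(audio).to(stt_model.device)
    _, lang_probs = stt_model.detect_language(mel)
    language = max(lang_probs, key=lang_probs.get)

    # 2. Transcribe the speech (the project routes this through its own
    #    speech recognizer; Whisper is reused here to keep the sketch short).
    text = stt_model.transcribe(wav_path, language=language)["text"]
    # ... send the transcription to the Unity client over a websocket ...

    # 3. Ask the Gemini LLM for Jenna's reply.
    reply = llm.generate_content(text).text

    # 4. Append the exchange to conversation_log.json.
    log = []
    if os.path.exists("conversation_log.json"):
        with open("conversation_log.json", encoding="utf-8") as f:
            log = json.load(f)
    log.append({"user": text, "jenna": reply, "language": language})
    with open("conversation_log.json", "w", encoding="utf-8") as f:
        json.dump(log, f, ensure_ascii=False, indent=2)

    # ... speak the reply with Parler TTS and drive the face/animation ...
    return reply
```

In the real project the transcription stage goes through SpeechRecognition/PocketSphinx and the reply is voiced by the fine-tuned Parler TTS; the sketch collapses those stages to stay short.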
Created based on our publication: https://ieeexplore.ieee.org/document/10576146

Built with:
- Python
- Langchain 3.0
- Whisper
- SpeechRecognition
- PocketSphinx
- Parler TTS
- HyperDB
- Sentence Transformers
- Tuya Cloud IoT
- Unity 2024
- Unity C#
- Rust
Prerequisites:
- Install Python 3.10.11 and set it as an environment variable in PATH
- Install GIT
- Install CUDA 11.7 if you have an Nvidia GPU
- Install Visual Studio Community 2022 and select "Desktop Development with C++" in the install options
- Install Unity
- Install FFMPEG
Installation:
1. Create a new project in Unity with URP settings
2. Replace the assets folder with the one in this branch
3. Open the Unity Scene from the scenes folder
4. Insert the API keys into the .env files (see the example after this list)
5. Run the STT service using: python STT.py
6. Launch the app by clicking on the play button
7. Wait for the application to download the required models on the first start
8. Say 'Start' to activate or 'Stop' to deactivate; it works like an OS
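For step 4, the snippet below shows one way the keys could be read from a .env file using python-dotenv. The variable names GEMINI_API_KEY and TUYA_ACCESS_ID are placeholders, so check the .env templates shipped with the repository for the real keys.

```python
# Hypothetical example of loading API keys from a .env file (step 4).
# The variable names below are placeholders, not the project's actual keys.
#
#   GEMINI_API_KEY=your-gemini-key
#   TUYA_ACCESS_ID=your-tuya-id
#
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the working directory
gemini_key = os.getenv("GEMINI_API_KEY")
tuya_id = os.getenv("TUYA_ACCESS_ID")
if not gemini_key:
    raise RuntimeError("GEMINI_API_KEY is missing from the .env file")
```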
Compiled EXE version coming soon!!!

Roadmap:
- Long-term memory
- Time and date awareness
- Mixed reality integration
- Gatebox-style hologram
- Dynamic voice like GPT-4o
- Alexa-like smart home control
- Multilingual
- Mobile version
- Easier setup
- Compiling into one exe
- Localized
- Home Control
- Outfit Change
- Add custom skills
- Music Player
- Different modes
- Face Recognition
- Dynamic Animations
- OS-like working
- Wakeword detection
- Mood recognition
- Weather Forecast
Distributed under the GNU Affero General Public License v3.0. See LICENSE.txt for more information.
E-mail: support@neuralharbour.com
Project Link: https://github.com/NeuralHarbour/LLM-Based-3D-Avatar-Assistant/v2.0