# 🧠 Agentic Voice/Text Support Chatbot

A multimodal chatbot interface with support for **text and voice input**, **multiple large language models (LLMs)**, and **context memory persistence**, all in a single Gradio-based GUI.

## 🚀 Features

- 🔄 **Multi-LLM switching**: Dynamically switch between OpenAI, Anthropic Claude, and Meta LLaMA (via Ollama)
- 🎤 **Voice input**: Use your microphone with live speech-to-text transcription
- 💬 **Contextual memory**: Maintain chat history even when switching models
- 🧪 **Prototype-ready**: Built with Gradio for rapid GUI testing and development

## 🛠️ Technologies Used

- [Gradio](https://www.gradio.app/) – GUI interface
- [OpenAI API](https://platform.openai.com/)
- [Anthropic Claude API](https://www.anthropic.com/)
- [Ollama](https://ollama.com/) – local LLaMA inference
- [`speech_recognition`](https://pypi.org/project/SpeechRecognition/) – voice-to-text
- `sounddevice`, `numpy` – audio recording
- `.env` – environment variable management

## ✅ Prerequisites

You'll also need:

- API keys for OpenAI and Claude
- Ollama installed locally to run LLaMA models
- A `.env` file with the necessary API keys
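A `.env` file for this kind of setup might look as follows. The variable names shown are the conventional ones for these SDKs, not confirmed by this project; check the code for the exact names it reads.

```
# Assumed variable names; adjust to match what the app actually loads
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

Keep this file out of version control (e.g. add `.env` to `.gitignore`), since it holds secrets.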
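The combination of multi-LLM switching and shared context memory can be sketched as a small session object that routes each turn to the currently selected backend while keeping one history list. This is a minimal illustration of the pattern, not the app's actual code: the `call_*` functions below are hypothetical stubs standing in for real OpenAI, Claude, and Ollama API calls.

```python
from dataclasses import dataclass, field

# Hypothetical stub backends; in the real app each would send the
# full history to its respective API and return the model's reply.
def call_openai(history):
    return f"openai reply to: {history[-1]['content']}"

def call_claude(history):
    return f"claude reply to: {history[-1]['content']}"

def call_llama(history):
    return f"llama reply to: {history[-1]['content']}"

BACKENDS = {"openai": call_openai, "claude": call_claude, "llama": call_llama}

@dataclass
class ChatSession:
    """One shared history, so context survives model switches."""
    model: str = "openai"
    history: list = field(default_factory=list)

    def switch_model(self, name: str) -> None:
        if name not in BACKENDS:
            raise ValueError(f"unknown model: {name}")
        self.model = name  # history is deliberately left intact

    def send(self, text: str) -> str:
        self.history.append({"role": "user", "content": text})
        reply = BACKENDS[self.model](self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because the history lives on the session rather than in any backend, switching models mid-conversation simply changes which function sees the accumulated turns on the next call.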