{
"cells": [
{
"cell_type": "markdown",
"id": "af388e13",
"metadata": {},
"source": [
"# 🌀 VoiceShift Hybrid\n",
"\n",
"This Colab project builds a **hybrid text transformation pipeline** that combines **OpenAI GPT** and a **Hugging Face model** to summarize and restyle text.\n",
"\n",
"---\n",
"\n",
"## ⚙️ Pipeline\n",
"1. **Summarize** with GPT (`gpt-4o-mini`) — concise and factual summary. \n",
"2. **Rewrite tone/style** with Hugging Face (`mistralai/Mistral-7B-Instruct-v0.1`). \n",
"3. **Streamed output** displayed live through Gradio UI.\n",
"\n",
"---\n",
"\n",
"## 🧠 Highlights\n",
"- Combines **frontier + open-source models** in one workflow. \n",
"- Supports **4-bit quantized loading** (fallback to fp16/fp32). \n",
"- Simple **Gradio interface** with real-time GPT streaming. \n",
"\n",
"---\n",
"\n",
"## 📘 Notebook\n",
"👉 [Open in Google Colab](https://colab.research.google.com/drive/1ZRPHKe9jg6nf1t7zIe2jjpUULl38jPOJ?usp=sharing)\n",
"\n",
"---\n",
"\n",
"## 🧩 Tech Stack\n",
"`OpenAI API · Transformers · BitsAndBytes · Torch · Gradio`\n",
"\n",
"---\n",
"\n",
"## 💡 Summary\n",
"**VoiceShift Hybrid** shows how a **frontier model ensures accuracy** while a **local model personalizes style**, achieving both **precision and creativity** in one simple, efficient pipeline.\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}