Merge pull request #809 from kbaah/kwabena_bootcamp

Kwabena Bootcamp
This commit is contained in:
Ed Donner
2025-10-23 09:28:09 -04:00
committed by GitHub

@@ -0,0 +1,173 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fd1cdd6e",
"metadata": {},
"source": [
"## Week 2 - Full Prototype for Technical Questions Answerer"
]
},
{
"cell_type": "markdown",
"id": "70db9a0b",
"metadata": {},
"source": [
" This notebook will implement a Gradio UI, streaming, use of the system prompt to add expertise, and the ability to switch between models."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df46689d",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7416a2a",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4.1-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "86966749",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"\"\"\n",
"You are an expert technical question answerer specializing in data science, programming, \n",
"and software engineering. Your goal is to provide clear, accurate, and practical answers \n",
"to technical questions.\n",
"\n",
"When answering:\n",
"- Break down complex concepts into understandable explanations\n",
"- Provide code examples when relevant, with comments explaining key parts\n",
"- Mention common pitfalls or best practices\n",
"- If a question is ambiguous, state your assumptions or ask for clarification\n",
"- For debugging questions, explain both the fix and why the error occurred\n",
"- Cite specific documentation or resources when helpful\n",
"\n",
"Always prioritize accuracy and clarity over speed. If you're unsure about something, \n",
"acknowledge the uncertainty rather than guessing.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d34e5b81",
"metadata": {},
"outputs": [],
"source": [
"# Streaming chat funcion\n",
"def chat(model, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}]\n",
" for h in history:\n",
" messages.append({\"role\": h[\"role\"], \"content\": h[\"content\"]})\n",
"\n",
" stream = openai.chat.completions.create(\n",
" model=model, \n",
" messages=messages,\n",
" stream=True\n",
" )\n",
"\n",
" response = \"\"\n",
" for chunk in stream:\n",
" if chunk.choices[0].delta.content is not None:\n",
" response += chunk.choices[0].delta.content\n",
" yield history + [{\"role\": \"assistant\", \"content\": response}]"
]
},
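{
"cell_type": "markdown",
"id": "3f9d2e10",
"metadata": {},
"source": [
"Before wiring this into the UI, here is a quick sanity check of the streaming generator on its own. This is a minimal sketch, not part of the original notebook; it assumes the API key from the initialization cell is set and that `MODEL` names an available model, and the test question is an arbitrary example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b8c4a77",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: drive the streaming generator directly, without Gradio.\n",
"# The test question below is an arbitrary example, not from the original notebook.\n",
"test_history = [{\"role\": \"user\", \"content\": \"What does Python's zip() do?\"}]\n",
"\n",
"final = \"\"\n",
"for partial in chat(MODEL, test_history):\n",
"    # Each yield is the full history plus the assistant reply so far\n",
"    final = partial[-1][\"content\"]\n",
"\n",
"print(final)"
]
},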
{
"cell_type": "code",
"execution_count": null,
"id": "32350869",
"metadata": {},
"outputs": [],
"source": [
"#Gradio Interface\n",
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
" chatbot = gr.Chatbot(height=500, type=\"messages\")\n",
" with gr.Row():\n",
" message = gr.Textbox(label=\"Chat with AI Assistant: \")\n",
" model_dropdown = gr.Dropdown(\n",
" choices=[\"gpt-4.1-mini\",\"gpt-4o-mini\", \"gpt-4o\", \"gpt-4-turbo\"], \n",
" value=\"gpt-4.1-mini\", \n",
" label=\"Select Model\"\n",
" ) \n",
"\n",
" def handle_submit(user_message, chat_history):\n",
" # Add user message to history\n",
" chat_history = chat_history + [{\"role\": \"user\", \"content\": user_message}]\n",
" return \"\", chat_history\n",
"\n",
" message.submit(\n",
" handle_submit, \n",
" inputs=[message, chatbot], \n",
" outputs=[message, chatbot]\n",
" ).then(\n",
" chat, \n",
" inputs=[model_dropdown, chatbot],\n",
" outputs=[chatbot]\n",
" )\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "markdown",
"id": "cf2b29e1",
"metadata": {},
"source": [
"### Concluding Remarks\n",
"In this exercise, we successfully built a working AI chatbot with Gradio that includes streaming responses and the ability to switch between different models. The implementation demonstrates how to create an interactive interface for LLM applications."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}