Merge pull request #772 from kbaah/kwabena_bootcamp

kwabena_bootcamp
This commit is contained in:
Ed Donner
2025-10-21 21:20:57 -04:00
committed by GitHub

@@ -0,0 +1,164 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4ea14045",
"metadata": {},
"source": [
"# End of Week 1 Exercise\n",
"\n",
"In this exercise, I'm building a small tool that takes a technical question and gets an explanation from **two models** — one from OpenAI and one from Ollama. \n",
"The idea is to compare how they respond and understand how to use both APIs.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18d3787e",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1592e306",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = \"gpt-4o-mini\"\n",
"MODEL_LLAMA = \"llama3.2\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "35da77ea",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"\n",
"if not api_key:\n",
" print(\"⚠️ OPENAI_API_KEY not found in environment. Please add it to your .env file.\")\n",
"else:\n",
" print(\"✅ API key loaded successfully\")\n",
"\n",
"client = OpenAI(api_key=api_key)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67efa212",
"metadata": {},
"outputs": [],
"source": [
"# define the technical question\n",
"# (you can replace this text to ask something else)\n",
"\n",
"question = \"\"\"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\"\n",
"\n",
"print(\"Question:\", question)\n"
]
},
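{
"cell_type": "markdown",
"id": "b7c1d2e0",
"metadata": {},
"source": [
"Before sending the question to the models, it helps to see the construct in action. Below is a minimal sketch on a made-up `books` list (the sample data is purely for illustration): the set comprehension collects each distinct author, skipping books without one, and `yield from` turns the function into a generator over that set.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3f9a1b2",
"metadata": {},
"outputs": [],
"source": [
"# Quick sanity check of the construct itself, on made-up sample data\n",
"\n",
"books = [\n",
"    {\"title\": \"A\", \"author\": \"Alice\"},\n",
"    {\"title\": \"B\", \"author\": \"Bob\"},\n",
"    {\"title\": \"C\"},  # no author key\n",
"    {\"title\": \"D\", \"author\": \"Alice\"},  # duplicate author\n",
"]\n",
"\n",
"def unique_authors():\n",
"    # The set comprehension collects each distinct author, skipping books\n",
"    # without one; `yield from` then yields those authors one at a time\n",
"    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"print(sorted(unique_authors()))  # ['Alice', 'Bob']\n"
]
},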
{
"cell_type": "code",
"execution_count": null,
"id": "85e1ac5b",
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o-mini to answer\n",
"\n",
"print(\"🔹 GPT-4o-mini's answer:\\n\")\n",
"\n",
"response = client.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful Python tutor.\"},\n",
" {\"role\": \"user\", \"content\": question},\n",
" ],\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
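{
"cell_type": "markdown",
"id": "d4e5f6a7",
"metadata": {},
"source": [
"Before calling the local model, it's worth confirming that the Ollama server is actually up. This is a minimal sketch assuming Ollama's default address (`http://localhost:11434`) and its `/api/tags` endpoint, which lists the models pulled locally.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8f90a1b",
"metadata": {},
"outputs": [],
"source": [
"# Confirm the local Ollama server is reachable before querying it\n",
"# (assumes the default address; adjust OLLAMA_URL if yours differs)\n",
"\n",
"import json\n",
"import urllib.request\n",
"\n",
"OLLAMA_URL = \"http://localhost:11434\"\n",
"\n",
"try:\n",
"    with urllib.request.urlopen(f\"{OLLAMA_URL}/api/tags\", timeout=5) as resp:\n",
"        models = [m[\"name\"] for m in json.load(resp).get(\"models\", [])]\n",
"    print(\"✅ Ollama is running. Local models:\", models)\n",
"except OSError as e:\n",
"    print(\"⚠️ Could not reach Ollama. Is it running? Try `ollama serve`.\", e)\n"
]
},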
{
"cell_type": "code",
"execution_count": null,
"id": "4c031d74",
"metadata": {},
"outputs": [],
"source": [
"# Get LLaMA 3.2 to answer via local Ollama endpoint\n",
"\n",
"print(\"\\n🔹 LLaMA 3.2's answer:\\n\")\n",
"\n",
"ollama_client = OpenAI(base_url=\"http://localhost:11434/v1\",api_key=\"ollama\")\n",
"\n",
"response = ollama_client.chat.completions.create(\n",
" model=MODEL_LLAMA,\n",
" messages=[\n",
" {\"role\":\"system\",\"content\":\"You are a helpful AI tutor.\"},\n",
" {\"role\":\"user\",\"content\":question}\n",
" ],\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n",
"\n"
]
},
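{
"cell_type": "markdown",
"id": "f1a2b3c4",
"metadata": {},
"source": [
"The reflection below mentions streaming, so here is a minimal sketch of what that looks like with the OpenAI client: pass `stream=True` and print each chunk's delta as it arrives. The same pattern should also work against the Ollama endpoint, since it exposes an OpenAI-compatible API.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9b8c7d6",
"metadata": {},
"outputs": [],
"source": [
"# Stream gpt-4o-mini's answer token by token instead of waiting for the full reply\n",
"\n",
"stream = client.chat.completions.create(\n",
"    model=MODEL_GPT,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful Python tutor.\"},\n",
"        {\"role\": \"user\", \"content\": question},\n",
"    ],\n",
"    stream=True,\n",
")\n",
"\n",
"for chunk in stream:\n",
"    # Each chunk carries an incremental piece of the reply (content may be None)\n",
"    print(chunk.choices[0].delta.content or \"\", end=\"\")\n"
]
},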
{
"cell_type": "markdown",
"id": "e4ddf582",
"metadata": {},
"source": [
"### Reflection\n",
"\n",
"Both models provide explanations, but often with slightly different tones. \n",
"`gpt-4o-mini` tends to give more structured explanations, while `llama3.2` (running locally through Ollama) may be more concise or technical depending on its settings.\n",
"\n",
"This exercise helped me understand:\n",
"- How to send prompts and handle responses (including streaming).\n",
"- How easy it is to swap between OpenAI and local models.\n",
"- The value of comparing model outputs side by side.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}