{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": 94,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": 95,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key found.\n"
]
}
],
"source": [
"# constants\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# check api key\n",
"if not api_key:\n",
" print(\"No API key was found!\")\n",
"else:\n",
" print(\"API key found.\")\n",
" \n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"openai = OpenAI()\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
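{
"cell_type": "markdown",
"id": "b2f6d1a0-1111-4c2e-9f3a-5e0d8c7a4b21",
"metadata": {},
"source": [
"Optional sanity check (a sketch, not part of the original exercise): the cell below asks the local Ollama server for its model list through the OpenAI-compatible `/v1/models` endpoint. It assumes Ollama is running on the default port 11434; if it is not, the call raises a connection error."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3a7e2b1-2222-4d3f-8a4b-6f1e9d8b5c32",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: confirm the local Ollama server is reachable before streaming.\n",
"# Assumes Ollama is serving at http://localhost:11434 (its default).\n",
"try:\n",
"    local_models = ollama_via_openai.models.list()\n",
"    print(\"Ollama is reachable. Locally available models:\")\n",
"    for model in local_models:\n",
"        print(f\" - {model.id}\")\n",
"except Exception as e:\n",
"    print(f\"Could not reach Ollama (is it running?): {e}\")"
]
},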
{
"cell_type": "code",
"execution_count": 96,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"🤖 Hi there! Im Gregory, your AI-powered tutor.\n",
"Feel free to ask me AI related technical questions — Im here to help!\n",
"For example, you can ask me how a piece of code works or anything else you're curious about.\n",
"\n",
"🤖 Please enter your question:\n",
" # get gpt-4o-mini to answer, with streaming def stream_gpt(question): stream = openai.chat.completions.create( model=MODEL_GPT, messages=question, stream=True ) response = \"\" display_handle = display(Markdown(\"\"), display_id=True) for chunk in stream: response += chunk.choices[0].delta.content or '' response = response.replace(\"```\",\"\").replace(\"markdown\", \"\") update_display(Markdown(response), display_id=display_handle.display_id)\n"
]
}
],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"system_prompt = \"\"\"You are Gregory, a friendly and knowledgeable AI tutor specializing in technical topics, especially programming, computer science, and software engineering.\n",
"Your goal is to help users understand technical concepts clearly, provide accurate code explanations, and guide them through learning with patience and clarity.\n",
"\n",
"- Always use clear, conversational language suited for learners of varying levels.\n",
"- Break down complex ideas into digestible steps.\n",
"- Use code examples where appropriate, and comment your code for better understanding.\n",
"- If a user asks a vague question, ask clarifying questions before giving an answer.\n",
"- Be encouraging, supportive, and professional.\n",
"- When in doubt, prioritize helping the user build confidence in learning technical skills.\"\"\"\n",
"\n",
"user_prompt = input(\"\"\"🤖 Hi there! Im Gregory, your AI-powered tutor.\n",
"Feel free to ask me AI related technical questions — Im here to help!\n",
"For example, you can ask me how a piece of code works or anything else you're curious about.\\n\n",
"🤖 Please enter your question:\\n\"\"\")\n",
"\n",
"question=[\n",
" {\"role\":\"system\", \"content\":system_prompt}\n",
" , {\"role\":\"user\", \"content\":user_prompt}\n",
"]"
]
},
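{
"cell_type": "markdown",
"id": "d4b8f3c2-3333-4e4a-9b5c-7a2f0e9c6d43",
"metadata": {},
"source": [
"To ask a follow-up question without re-running the `input()` cell, a small helper (a convenience sketch, not required by the exercise; `make_question` is our own name) can rebuild the messages list for any new question."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5c9a4d3-4444-4f5b-8c6d-8b3a1f0d7e54",
"metadata": {},
"outputs": [],
"source": [
"# Convenience sketch: build the messages list for a new question\n",
"# without prompting via input() again.\n",
"def make_question(user_prompt):\n",
"    return [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt}\n",
"    ]\n",
"\n",
"# Example: question = make_question(\"What does a Python decorator do?\")"
]
},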
{
"cell_type": "code",
"execution_count": 97,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"source": [
"# get gpt-4o-mini to answer, with streaming\n",
"def stream_gpt(question):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=question,\n",
" stream=True\n",
" )\n",
"\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 98,
"id": "4772b3ae-0b90-42bd-b158-dedf1f340030",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"It looks like you're trying to implement a streaming response handler to interact with the OpenAI GPT-4o-mini model. I see that you want to receive streamed responses and display them dynamically. Let's break down your code step by step and clarify some aspects to ensure it works effectively.\n",
"\n",
"Here's an improved version of your function with comments for clarity:\n",
"\n",
"python\n",
"import openai\n",
"from IPython.display import display, Markdown, update_display\n",
"\n",
"# Replace 'MODEL_GPT' with your actual model name (e.g., \"gpt-3.5-turbo\").\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"\n",
"def stream_gpt(question):\n",
" # Create a streaming request to the OpenAI API with the specified model and user question.\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=question,\n",
" stream=True\n",
" )\n",
" \n",
" # Initialize an empty response string to build the complete output.\n",
" response = \"\"\n",
" \n",
" # Create a display handle for Markdown output in Jupyter Notebook or similar environments.\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" \n",
" # Loop through each chunk of streamed response.\n",
" for chunk in stream:\n",
" # Retrieve the content of the current chunk and append it to the response string.\n",
" response += chunk.choices[0].delta.content or ''\n",
" \n",
" # Clean up response text to remove any unwanted Markdown formatting.\n",
" response = response.replace(\"\", \"\").replace(\"\", \"\")\n",
" \n",
" # Update the displayed text in real-time.\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"# To use this function, call it with a properly formatted question.\n",
"# Example of usage:\n",
"# stream_gpt([{\"role\": \"user\", \"content\": \"What's the weather like today?\"}])\n",
"\n",
"\n",
"### Key Points to Note:\n",
"1. **Streaming Behavior**: The `stream=True` parameter in the `openai.chat.completions.create` call allows you to get part of the response as its being generated instead of waiting for the entire completion.\n",
" \n",
"2. **Question Formatting**: Ensure to pass the `question` into the `messages` parameter as a list of dictionaries, where each dictionary contains the 'role' of the speaker (like 'user' or 'assistant') and the message content.\n",
"\n",
"3. **Updating Display**: Using `IPython.display` allows real-time updates of the Markdown output in environments like Jupyter notebooks.\n",
"\n",
"4. **Error Handling**: Consider adding error handling for HTTP errors or issues with the streaming process. This ensures that your function can gracefully handle problems.\n",
"\n",
"5. **Environment Compatibility**: This code works seamlessly in an interactive environment that supports IPython, such as Jupyter notebooks.\n",
"\n",
"Feel free to ask more questions if you need further clarification on any part of this code or if you want to expand its functionality!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"stream_gpt(question)"
]
},
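{
"cell_type": "markdown",
"id": "f6d0b5e4-5555-4a6c-9d7e-9c4b2a1e8f65",
"metadata": {},
"source": [
"The answer above recommends adding error handling around the streaming call. Below is one minimal way to do that, sketched with the `APIConnectionError` and `APIError` exception classes that the `openai` package exports; the wrapper name `safe_stream` is ours, not part of the exercise."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7e1c6f5-6666-4b7d-8e8f-0d5c3b2f9a76",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: wrap a streaming function with basic error handling,\n",
"# as the answer above suggests.\n",
"from openai import APIConnectionError, APIError\n",
"\n",
"def safe_stream(stream_fn, question):\n",
"    try:\n",
"        stream_fn(question)\n",
"    except APIConnectionError:\n",
"        print(\"Could not connect to the API endpoint. Check the server or network.\")\n",
"    except APIError as e:\n",
"        print(f\"The API returned an error: {e}\")\n",
"\n",
"# Example: safe_stream(stream_gpt, question)"
]
},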
{
"cell_type": "code",
"execution_count": 99,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [],
"source": [
"# get Llama 3.2 to answer\n",
"def stream_llama(question):\n",
" stream = ollama_via_openai.chat.completions.create(\n",
" model=MODEL_LLAMA,\n",
" messages=question,\n",
" stream=True\n",
" )\n",
"\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
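{
"cell_type": "markdown",
"id": "b8f2d7a6-7777-4c8e-9f9a-1e6d4c3a0b87",
"metadata": {},
"source": [
"Since both backends share the same calling convention, a tiny dispatcher (a convenience sketch; `tutor` is our own name) switches between GPT and Llama with one flag."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9a3e8b7-8888-4d9f-8a0b-2f7e5d4b1c98",
"metadata": {},
"outputs": [],
"source": [
"# Convenience sketch: route the question to either backend with one call.\n",
"def tutor(question, use_ollama=False):\n",
"    if use_ollama:\n",
"        stream_llama(question)  # local Llama 3.2 via Ollama\n",
"    else:\n",
"        stream_gpt(question)  # gpt-4o-mini via the OpenAI API\n",
"\n",
"# Example: tutor(question, use_ollama=True)"
]
},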
{
"cell_type": "code",
"execution_count": 100,
"id": "c288d5b6-4e55-4a58-8e55-2abea1ae9e01",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Hello there! It seems like you're working with the OpenAI GPT-4 model to generate human-like responses. The code snippet you provided is quite interesting, and I'll do my best to break it down for you.\n",
"\n",
"**What this code does**\n",
"\n",
"This `stream_gpt` function appears to be a wrapper around the OpenAI API, which generates text completions based on user input (you). Here's what the function does in detail:\n",
"\n",
"1. **Create GPT-4 model instance**: It creates an instance of the GPT-4 model using the `MODEL_GPT` variable, which suggests that this is a predefined model configuration.\n",
"2. **Open API stream**: It opens a connection to the OpenAI API's completions endpoint using the `openai.chat.completions.create` method, passing in the `model` parameter (the GPT-4 instance) and the `messages` parameter (your question).\n",
"\n",
" python\n",
"stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=question,\n",
" stream=True\n",
")\n",
"\n",
"\n",
" The `stream=True` parameter is necessary because we want to read responses from the API in real-time without having to wait for the entire response to be received.\n",
"\n",
"3. **Process responses**: Inside an infinite loop (`forchunk in stream:`), it reads and processes each chunk of response from the API:\n",
"\n",
" python\n",
"for chunk in stream:\n",
"response += chunk.choices[0].delta.content or ''\n",
"\n",
"\n",
" - `chunk` is a dictionary-like object containing information about the API's response.\n",
" - `choices` is an array of possible completions, with only one choice shown (`[0]`) by default. We're assuming this is the primary completion we want to display.\n",
" - `.delta.content` gives us the actual text response from the API. This could be a full paragraph, sentence, or even just a word.\n",
" - `response += chunk.choices[0].delta.content or ''`: We simply append any remaining text from previous chunks if there was one.\n",
"\n",
"4. **Format and display**: It reformats the response to remove Markdown formatting (``)) and then uses a `display` function to show an updated version of the original question:\n",
"\n",
" python\n",
"response = response.replace(\"\", \"\").replace(\"\", \"\")\n",
"update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"\n",
"5. **Update display**: After formatting, it updates the display with the latest response.\n",
"\n",
"**Issue concerns**\n",
"\n",
"One potential issue here: `while True` or a similar loop structure should be used instead of an `Infinite` loop for this streamer's functionality.\n",
"\n",
"Also, error handling would be necessary if we wanted more control over any possible errors while streaming results from API requests."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"stream_llama(question)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}