Merge pull request #733 from javieroteromarquez/jom/week1
Added week 1 exercise with enum and small prompting for AI Tutor
This commit is contained in:
225
week1/community-contributions/week1_exercise_jom.ipynb
Normal file
@@ -0,0 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"from IPython.display import display, Markdown, update_display\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "16ec5d8a",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"\n",
"You are a helpful tutor that explains code. You need to provide an answer structured in markdown, without code blocks, in the following parts:\n",
"- Identify the topic of the question (so the user can look for more info)\n",
"- Give an ELI5 explanation of the question.\n",
"- Give a step-by-step explanation of the code.\n",
"- Ask the user a follow-up question or variation of the question to see if they understand the concept.\n",
"- Give the answer to the follow-up question as a spoiler.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"\n",
"from enum import StrEnum\n",
"\n",
"class Enum_Model(StrEnum):\n",
"    GPT = 'gpt-4o-mini'\n",
"    LLAMA = 'llama3.2:1b'\n",
"\n",
"def ask_tutor(question, model: str | Enum_Model):\n",
"    if isinstance(model, str):\n",
"        try:\n",
"            model = Enum_Model[model.upper()]\n",
"        except KeyError:\n",
"            raise ValueError(f\"Unknown model: {model}\")\n",
"    if model == Enum_Model.LLAMA:\n",
"        LLM_ENDPOINT = \"http://localhost:11434/v1\"\n",
"        client = OpenAI(base_url=LLM_ENDPOINT)\n",
"    elif model == Enum_Model.GPT:\n",
"        client = OpenAI()\n",
"    else:\n",
"        raise ValueError(f\"Unknown model: {model}\")\n",
"    stream = client.chat.completions.create(\n",
"        model=model,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": question}],\n",
"        stream=True\n",
"    )\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        response += chunk.choices[0].delta.content or ''\n",
"        update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "b0c30dbf",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"## Explanation of the Code\n",
"\n",
"This piece of code is using a concept called \"generators\" in Python. The function `yield from` is used to generate a sequence of values on-the-fly, without needing to store them all in memory at once.\n",
"\n",
"### ELI5 Explanation:\n",
"\n",
"Think of it like this: Imagine you're trying to create a long list of authors for books. If you were to write the entire list out and then return it, you might run into a problem because Python doesn't know where to put it until you actually read it. That process can be slow.\n",
"\n",
"This type of code is like writing the list once and giving it to your friends who can then use it wherever they need. Instead of having all that data inside one giant list in its entirety, this generator creates a small amount of data each time it's needed (like when someone calls `get(\"author\")` on an author object). It's called a \"generator\" because it generates values rather than computing them and storing them all at once.\n",
"\n",
"### Code Explanation:\n",
"\n",
"This code is defining a function that takes in another function as an argument (`books`) but doesn't use its return value directly. Instead, it uses Python's `yield from` feature to call the inner function (`book.get(\"author\") for book in books`) repeatedly until it encounters a stop signal.\n",
"\n",
"Here's what the code does step by step:\n",
"1. It starts with defining an empty sequence variable `result` that will hold our answer (because this is going to be another generator).\n",
"2. It opens the `books` collection, which presumably contains a list of book objects.\n",
"3. It then uses an expression to iterate over each book in the `books` collection and call the anonymous function (`book.get(\"author\")`) on it.\n",
"4. The result of this iteration is what's assigned to our sequence variable `result`.\n",
"5. `result` is an iterator, and when we do `yield from`, Python switches control to the generator inside `books`.\n",
"6. This happens until someone calls the `stop()` function on the outer generator (perhaps as a way to explicitly exit the generation process) or if there's no more data left in the collection.\n",
"7. At that point, Python returns control back to our inner function (`book.get(\"author\")`), which now has its own sequence variable and can keep generating values until you call `stop()` on it.\n",
"\n",
"### Follow-up Question:\n",
"\n",
"How would this code be used if there were already some data stored elsewhere on the computer?\n",
"\n",
"SPOILER ALERT:\n",
"\n",
"The answer is that this type of code doesn't actually store anything in memory. Instead, it's just using a trick to tell Python (and JavaScript) which values we'd like it to generate based on an iterable (`books`), without having to store those ones in its actual database or cache beforehand."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"ask_tutor(question, 'LLAMA')\n"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"### Topic: Python Generators and Set Comprehensions\n",
"\n",
"### ELI5 Explanation\n",
"Imagine you have a big box of books, and each book has different information like the title and the author. You want to make a special list of all the authors of the books you have, but you only want to write down the names once, even if the same author wrote multiple books. This code snippet is doing just that! It's like saying, \"Hey! For all the books in this box, find out who the authors are and tell me their names, but don’t tell me any duplicates.\"\n",
"\n",
"### Step by Step Explanation of the Code\n",
"1. **Book Collection**: The variable `books` is expected to be a collection (like a list) of dictionaries. Each dictionary represents a book with various attributes like \"author,\" \"title,\" etc.\n",
" \n",
"2. **List Comprehension**: The code uses a set comprehension `{...}` to create a set of authors.\n",
" - Inside the braces `{}`, it iterates over each `book` in the collection `books`.\n",
" - For each `book`, it retrieves the author's name using `book.get(\"author\")`. The `get` method is safe—it won’t cause an error if the key \"author\" doesn’t exist; it just returns `None`.\n",
"\n",
"3. **Filtering**: The `if book.get(\"author\")` part ensures that only books with a valid author are considered. It filters out any books that don’t have an author specified.\n",
"\n",
"4. **Yielding Results**: The `yield from` statement is used to yield each element from the set of authors (which automatically handles duplicates by using the properties of a set) back to wherever this generator function is called. This means it will return each author's name one by one when called upon.\n",
"\n",
"### Follow-up Question\n",
"What would happen if you used a list comprehension instead of a set comprehension in this code? Would you still get unique authors?\n",
"\n",
"### Answer to the Follow-up Question: \n",
"<details>\n",
"<summary>Click to see the answer</summary>\n",
"No, using a list comprehension would collect all authors, including duplicates. You would get a list of authors that may have repeated names if multiple books were written by the same author.\n",
"</details>"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"ask_tutor(question, 'GPT')\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
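The snippet the notebook asks the tutor about can also be exercised standalone. This is a minimal sketch with made-up book data (the `books` list and `unique_authors` wrapper are illustrative, not from the notebook); it shows the de-duplication and filtering behavior that the GPT answer describes.

```python
def unique_authors(books):
    # The set comprehension collects each author at most once and drops
    # books whose "author" key is missing or falsy; `yield from` then
    # yields the surviving names one at a time.
    yield from {book.get("author") for book in books if book.get("author")}

books = [
    {"title": "A", "author": "Ada"},
    {"title": "B", "author": "Ada"},    # duplicate author, collapsed by the set
    {"title": "C", "author": "Grace"},
    {"title": "D"},                     # no author key, filtered out
]

print(sorted(unique_authors(books)))  # ['Ada', 'Grace']
```

Note that set iteration order is arbitrary, which is why the usage sorts the result before printing.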