{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
"\n",
"Today we'll connect with them through their APIs."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../assets/important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
"            <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
"            At the start of each week, it's worth checking you have the latest code.<br/>\n",
"            First do a git pull and merge your changes as needed. Check out the GitHub guide for instructions. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/>\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../assets/resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
"            <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
"            <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
"            Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys - OPTIONAL!\n",
"\n",
"We're now going to try asking a bunch of models some questions!\n",
"\n",
"This is totally optional. If you have keys to Anthropic, Gemini or others, then you can add them in.\n",
"\n",
"If you'd rather not spend the extra, then just watch me do it!\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"For DeepSeek, visit https://platform.deepseek.com/ \n",
"For Groq, visit https://console.groq.com/ \n",
"For Grok, visit https://console.x.ai/ \n",
"\n",
"\n",
"You can also use OpenRouter as your one-stop-shop for many of these! OpenRouter is \"the unified interface for LLMs\":\n",
"\n",
"For OpenRouter, visit https://openrouter.ai/ \n",
"\n",
"\n",
"With each of the above, you typically have to navigate to:\n",
"1. Their billing page to add the minimum top-up (though Gemini, Groq and OpenRouter may have free tiers)\n",
"2. Their API key page to collect your API key\n",
"\n",
"### Adding API keys to your .env file\n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"DEEPSEEK_API_KEY=xxxx\n",
"GROQ_API_KEY=xxxx\n",
"GROK_API_KEY=xxxx\n",
"OPENROUTER_API_KEY=xxxx\n",
"```\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../assets/important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Any time you change your .env file</h2>\n",
"            <span style=\"color:#900;\">Remember to Save it! And also rerun load_dotenv(override=True)<br/>\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b0abffac",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key not set (and this is optional)\n",
"Google API Key exists and begins AI\n",
"DeepSeek API Key not set (and this is optional)\n",
"Groq API Key not set (and this is optional)\n",
"Grok API Key not set (and this is optional)\n",
"OpenRouter API Key not set (and this is optional)\n"
]
}
],
"source": [
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
"groq_api_key = os.getenv('GROQ_API_KEY')\n",
"grok_api_key = os.getenv('GROK_API_KEY')\n",
"openrouter_api_key = os.getenv('OPENROUTER_API_KEY')\n",
"\n",
"if openai_api_key:\n",
"    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
"    print(\"OpenAI API Key not set\")\n",
"    \n",
"if anthropic_api_key:\n",
"    print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
"    print(\"Anthropic API Key not set (and this is optional)\")\n",
"\n",
"if google_api_key:\n",
"    print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n",
"else:\n",
"    print(\"Google API Key not set (and this is optional)\")\n",
"\n",
"if deepseek_api_key:\n",
"    print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
"else:\n",
"    print(\"DeepSeek API Key not set (and this is optional)\")\n",
"\n",
"if groq_api_key:\n",
"    print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n",
"else:\n",
"    print(\"Groq API Key not set (and this is optional)\")\n",
"\n",
"if grok_api_key:\n",
"    print(f\"Grok API Key exists and begins {grok_api_key[:4]}\")\n",
"else:\n",
"    print(\"Grok API Key not set (and this is optional)\")\n",
"\n",
"if openrouter_api_key:\n",
"    print(f\"OpenRouter API Key exists and begins {openrouter_api_key[:3]}\")\n",
"else:\n",
"    print(\"OpenRouter API Key not set (and this is optional)\")\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "985a859a",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI client library\n",
"# A thin wrapper around calls to HTTP endpoints\n",
"\n",
"openai = OpenAI()\n",
"\n",
"# For Anthropic, Gemini, DeepSeek, Groq and the others, we can also use the OpenAI python client\n",
"# Because these providers offer endpoints compatible with OpenAI\n",
"# And OpenAI allows you to change the base_url\n",
"\n",
"anthropic_url = \"https://api.anthropic.com/v1/\"\n",
"gemini_url = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"deepseek_url = \"https://api.deepseek.com\"\n",
"groq_url = \"https://api.groq.com/openai/v1\"\n",
"grok_url = \"https://api.x.ai/v1\"\n",
"openrouter_url = \"https://openrouter.ai/api/v1\"\n",
"ollama_url = \"http://localhost:11434/v1\"\n",
"\n",
"anthropic = OpenAI(api_key=anthropic_api_key, base_url=anthropic_url)\n",
"gemini = OpenAI(api_key=google_api_key, base_url=gemini_url)\n",
"deepseek = OpenAI(api_key=deepseek_api_key, base_url=deepseek_url)\n",
"groq = OpenAI(api_key=groq_api_key, base_url=groq_url)\n",
"grok = OpenAI(api_key=grok_api_key, base_url=grok_url)\n",
"openrouter = OpenAI(base_url=openrouter_url, api_key=openrouter_api_key)\n",
"ollama = OpenAI(api_key=\"ollama\", base_url=ollama_url)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "16813180",
"metadata": {},
"outputs": [],
"source": [
"tell_a_joke = [\n",
"    {\"role\": \"user\", \"content\": \"Tell a joke for a student on the journey to becoming an expert in LLM Engineering\"},\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "23e92304",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Sure! Here's a joke for an aspiring LLM Engineer:\n",
"\n",
"Why did the LLM Engineer bring a ladder to the training session?\n",
"\n",
"Because they wanted to reach the next level of \"model\" performance!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=tell_a_joke)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e03c11b9",
"metadata": {},
"outputs": [],
"source": [
"response = anthropic.chat.completions.create(model=\"claude-sonnet-4-5-20250929\", messages=tell_a_joke)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "ab6ea76a",
"metadata": {},
"source": [
"## Training vs Inference time scaling"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "afe9e11c",
"metadata": {},
"outputs": [],
"source": [
"easy_puzzle = [\n",
"    {\"role\": \"user\", \"content\": \n",
"     \"You toss 2 coins. One of them is heads. What's the probability the other is tails? Answer with the probability only.\"},\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4a887eb3",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"1/2"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5-nano\", messages=easy_puzzle, reasoning_effort=\"minimal\")\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5f854d01",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"2/3"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5-nano\", messages=easy_puzzle, reasoning_effort=\"low\")\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f45fc55b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"2/3"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5-mini\", messages=easy_puzzle, reasoning_effort=\"minimal\")\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "ca713a5c",
"metadata": {},
"source": [
"## Testing out the best models on the planet"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "df1e825b",
"metadata": {},
"outputs": [],
"source": [
"hard = \"\"\"\n",
"On a bookshelf, two volumes of Pushkin stand side by side: the first and the second.\n",
"The pages of each volume together have a thickness of 2 cm, and each cover is 2 mm thick.\n",
"A worm gnawed (perpendicular to the pages) from the first page of the first volume to the last page of the second volume.\n",
"What distance did it gnaw through?\n",
"\"\"\"\n",
"hard_puzzle = [\n",
"    {\"role\": \"user\", \"content\": hard}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8f6a7827",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Interpret the placement and thicknesses.\n",
"\n",
"- Each volume has pages total thickness 2 cm.\n",
"- Each cover thickness: 2 mm = 0.2 cm.\n",
"- They are on a shelf side by side: [ cover1 ] [ pages1 ] [ cover2 ] [ pages2 ] in some order? Actually when books stand upright, the order from left to right is: cover of first volume (spine facing out, but thickness along shelf is the front cover), then pages, then back cover; then immediately next book with its front cover, pages, back cover.\n",
"\n",
"But note the worm goes from the first page of the first volume to the last page of the second volume, perpendicular to the pages. That means it starts at the very first page of volume 1 (the side adjacent to the front cover) and ends at the very last page of volume 2 (adjacent to its back cover). We need the straight-line distance through the intervening material along the shelf, i.e., how much thickness of material it passes through along the line of the shelf.\n",
"\n",
"Crucial arrangement: The two books are placed side by side in order: [V1] followed by [V2]. The thicknesses along the shelf are (for each book): front cover 0.2 cm, pages 2.0 cm, back cover 0.2 cm. So each book total thickness is 2.4 cm.\n",
"\n",
"If the worm travels from the first page of V1 (which is just after the front cover of V1) to the last page of V2 (which is just before the back cover of V2), then the worm traverses:\n",
"- inside V1: from the first page to the back cover? Wait \"from the first page of the first volume\" means just after the front cover, going toward the back. The last page of the second volume is just before the back cover, i.e., just before the back cover of V2. So the path goes through:\n",
"1) the rest of V1 pages (2.0 cm minus maybe some negligible starting boundary). It starts at the first page, so it must go through the remainder of V1 pages: that's 2.0 cm of pages because the first page is at the very start of the pages block; going to the back cover would include passing through the remaining pages (2.0 cm total pages, but starting at first page means you still pass through full 2.0 cm of pages). Actually from the first page to the last page you still traverse the entire pages thickness of V1: 2.0 cm.\n",
"2) then through the back cover of V1: 0.2 cm.\n",
"3) then through the space between books? If the books are touching, there is no gap. The worm continues into the front cover or pages of V2. It must go through the front cover of V2 (0.2 cm) and then through the pages of V2 up to the last page (but it ends at the last page, which is just before the back cover, so it traverses the entire pages block of V2: 2.0 cm) or does it stop at last page meaning not including back cover. So it traverses the entire pages thickness of V2: 2.0 cm.\n",
"4) It does not need to go through the back cover of V2 since it ends at last page.\n",
"\n",
"Total distance = V1 pages 2.0 + V1 back cover 0.2 + V2 front cover 0.2 + V2 pages 2.0 = 4.4 cm.\n",
"\n",
"Convert to mm: 44 mm.\n",
"\n",
"Answer: 4.4 cm (44 mm)."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5-nano\", messages=hard_puzzle, reasoning_effort=\"minimal\")\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d693ac0d",
"metadata": {},
"outputs": [],
"source": [
"response = anthropic.chat.completions.create(model=\"claude-sonnet-4-5-20250929\", messages=hard_puzzle)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7de7818f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"4 mm.\n",
"\n",
"Reason: On a shelf with spines outward, the first page of volume 1 is just inside its front cover (which faces volume 2), and the last page of volume 2 is just inside its back cover (which faces volume 1). So the worm passes only through those two covers: 2 mm + 2 mm = 4 mm."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5\", messages=hard_puzzle)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "de1dc5fa",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"This is a classic riddle that plays on our assumptions about how books are arranged. Here is the step-by-step solution:\n",
"\n",
"The distance the worm gnawed is **4 mm**.\n",
"\n",
"Here’s why:\n",
"\n",
"1. **Visualize the books on the shelf.** The volumes are standing side by side in the correct order: Volume 1 is on the left, and Volume 2 is on the right.\n",
"\n",
"2. **Consider the orientation of the pages and covers.** For a book in English (or Russian), the first page is on the right side of the book block (right behind the front cover), and the last page is on the left side of the book block (just before the back cover).\n",
"\n",
"3. **Picture the layout from left to right on the shelf:**\n",
"    * First, you have the **front cover of Volume 1**.\n",
"    * Then, the **pages of Volume 1**.\n",
"    * Then, the **back cover of Volume 1**.\n",
"    * Touching that is the **front cover of Volume 2**.\n",
"    * Then, the **pages of Volume 2**.\n",
"    * Finally, the **back cover of Volume 2**.\n",
"\n",
"    Wait, that's incorrect. Let's place them on the shelf correctly:\n",
"\n",
"    * [**Back Cover** of Vol 1] [Pages of Vol 1] [**Front Cover** of Vol 1] ← Volume 1\n",
"    * [**Back Cover** of Vol 2] [Pages of Vol 2] [**Front Cover** of Vol 2] ← Volume 2\n",
"\n",
"    So, on the shelf, the order of the parts from left to right is:\n",
"    * Back Cover of Vol 1\n",
"    * Pages of Vol 1\n",
"    * **Front Cover of Vol 1**\n",
"    * **Back Cover of Vol 2**\n",
"    * Pages of Vol 2\n",
"    * Front Cover of Vol 2\n",
"\n",
"4. **Trace the worm's path.**\n",
"    * The worm starts at the **first page of the first volume**. This page is right next to the **front cover of Volume 1**.\n",
"    * The worm ends at the **last page of the second volume**. This page is right next to the **back cover of Volume 2**.\n",
"\n",
"    Looking at the layout, the worm only needs to travel through the two covers that are in the middle, between the main block of pages of each volume.\n",
"\n",
"5. **Calculate the distance.**\n",
"    * The worm gnaws through the **front cover of Volume 1** (2 mm).\n",
"    * The worm gnaws through the **back cover of Volume 2** (2 mm).\n",
"\n",
"The total distance is the sum of the thicknesses of these two covers:\n",
"2 mm + 2 mm = **4 mm**.\n",
"\n",
"The 2 cm thickness of the pages is extra information designed to mislead you. The worm never actually gnaws through the block of pages of either volume."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = gemini.chat.completions.create(model=\"gemini-2.5-pro\", messages=hard_puzzle)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "9a9faf98",
"metadata": {},
"source": [
"## A spicy challenge to test the competitive spirit"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "fc1824ad",
"metadata": {},
"outputs": [],
"source": [
"dilemma_prompt = \"\"\"\n",
"You and a partner are contestants on a game show. You're each taken to separate rooms and given a choice:\n",
"Cooperate: Choose \"Share\" — if both of you choose this, you each win $1,000.\n",
"Defect: Choose \"Steal\" — if one steals and the other shares, the stealer gets $2,000 and the sharer gets nothing.\n",
"If both steal, you both get nothing.\n",
"Do you choose to Steal or Share? Pick one.\n",
"\"\"\"\n",
"\n",
"dilemma = [\n",
"    {\"role\": \"user\", \"content\": dilemma_prompt},\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ff03874f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Share"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = openai.chat.completions.create(model=\"gpt-5\", messages=dilemma)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09807f1a",
"metadata": {},
"outputs": [],
"source": [
"response = anthropic.chat.completions.create(model=\"claude-sonnet-4-5-20250929\", messages=dilemma)\n",
"display(Markdown(response.choices[0].message.content))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "230f49d6",
"metadata": {},
"outputs": [],
"source": [
"response = groq.chat.completions.create(model=\"openai/gpt-oss-120b\", messages=dilemma)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "421f08df",
"metadata": {},
"outputs": [],
"source": [
"response = deepseek.chat.completions.create(model=\"deepseek-reasoner\", messages=dilemma)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2599fc6e",
"metadata": {},
"outputs": [],
"source": [
"response = grok.chat.completions.create(model=\"grok-4\", messages=dilemma)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "162752e9",
"metadata": {},
"source": [
"## Going local\n",
"\n",
"Just use the OpenAI library pointed at http://localhost:11434/v1"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "ba03ee29",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"b'Ollama is running'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"requests.get(\"http://localhost:11434/\").content\n",
"\n",
"# If not running, run ollama serve at a command line"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f363cd6b",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96e97263",
"metadata": {},
"outputs": [],
"source": [
"# Only do this if you have a large machine - at least 16GB RAM\n",
"\n",
"!ollama pull gpt-oss:20b"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "a3bfc78a",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"1/2"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = ollama.chat.completions.create(model=\"llama3.2\", messages=easy_puzzle)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a5527a3",
"metadata": {},
"outputs": [],
"source": [
"response = ollama.chat.completions.create(model=\"gpt-oss:20b\", messages=easy_puzzle)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "a0628309",
"metadata": {},
"source": [
"## Gemini and Anthropic Client Library\n",
"\n",
"We're going via the OpenAI Python Client Library, but the other providers have their own client libraries too"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Blue is the feeling of a clear sky on a sunny day, vast and cool.\n"
]
}
],
"source": [
"from google import genai\n",
"\n",
"client = genai.Client()\n",
"\n",
"response = client.models.generate_content(\n",
"    model=\"gemini-2.5-flash-lite\", contents=\"Describe the color Blue to someone who's never been able to see in 1 sentence\"\n",
")\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df7b6c63",
"metadata": {},
"outputs": [],
"source": [
"from anthropic import Anthropic\n",
"\n",
"client = Anthropic()\n",
"\n",
"response = client.messages.create(\n",
"    model=\"claude-sonnet-4-5-20250929\",\n",
"    messages=[{\"role\": \"user\", \"content\": \"Describe the color Blue to someone who's never been able to see in 1 sentence\"}],\n",
"    max_tokens=100\n",
")\n",
"print(response.content[0].text)"
]
},
{
"cell_type": "markdown",
"id": "45a9d0eb",
"metadata": {},
"source": [
"## Routers and Abstraction Layers\n",
"\n",
"Starting with the wonderful OpenRouter.ai - it can connect to all the models above!\n",
"\n",
"Visit openrouter.ai and browse the models.\n",
"\n",
"Here's one we haven't seen yet: GLM 4.5 from Chinese startup z.ai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9fac59dc",
"metadata": {},
"outputs": [],
"source": [
"response = openrouter.chat.completions.create(model=\"z-ai/glm-4.5\", messages=tell_a_joke)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "b58908e6",
"metadata": {},
"source": [
"## And now a first look at the powerful, mighty (and quite heavyweight) LangChain"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "02e145ad",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"How many LLM engineering students does it take to change a lightbulb?\n",
"\n",
"One — but first they’ll fine-tune three models, run ablation studies, write a 20-page README, open an issue blaming the dataset, and then realize it was just a loose socket."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-5-mini\")\n",
"response = llm.invoke(tell_a_joke)\n",
"\n",
"display(Markdown(response.content))"
]
},
{
"cell_type": "markdown",
"id": "92d49785",
"metadata": {},
"source": [
"## Finally - my personal fave - the wonderfully lightweight LiteLLM"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "63e42515",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Why did the LLM engineer take a nap while fine-tuning?\n",
"\n",
"Because sometimes you just have to let the model rest before expecting it to perform better!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from litellm import completion\n",
"response = completion(model=\"openai/gpt-4.1\", messages=tell_a_joke)\n",
"reply = response.choices[0].message.content\n",
"display(Markdown(reply))"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "36f787f5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Input tokens: 24\n",
"Output tokens: 31\n",
"Total tokens: 55\n",
"Total cost: 0.0296 cents\n"
]
}
],
"source": [
"print(f\"Input tokens: {response.usage.prompt_tokens}\")\n",
"print(f\"Output tokens: {response.usage.completion_tokens}\")\n",
"print(f\"Total tokens: {response.usage.total_tokens}\")\n",
"print(f\"Total cost: {response._hidden_params[\"response_cost\"]*100:.4f} cents\")"
]
},
{
"cell_type": "markdown",
"id": "28126494",
"metadata": {},
"source": [
"## Now - let's use LiteLLM to illustrate a Pro-feature: prompt caching"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "f8a91ef4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Speak, man.\n",
"  Laer. Where is my father?\n",
"  King. Dead.\n",
"  Queen. But not by him!\n",
"  King. Let him deman\n"
]
}
],
"source": [
"with open(\"hamlet.txt\", \"r\", encoding=\"utf-8\") as f:\n",
"    hamlet = f.read()\n",
"\n",
"loc = hamlet.find(\"Speak, man\")\n",
"print(hamlet[loc:loc+100])"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "7f34f670",
"metadata": {},
"outputs": [],
"source": [
"question = [{\"role\": \"user\", \"content\": \"In Hamlet, when Laertes asks 'Where is my father?' what is the reply?\"}]"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "9db6c82b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"In Shakespeare's *Hamlet*, when Laertes asks **\"Where is my father?\"**, the reply comes from his sister, **Ophelia**.\n",
"\n",
"She says: **\"Hath he had a good end?\"**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = completion(model=\"gemini/gemini-2.5-flash-lite\", messages=question)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "228b7e7c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Input tokens: 19\n",
"Output tokens: 45\n",
"Total tokens: 64\n",
"Total cost: 0.0020 cents\n"
]
}
],
"source": [
"print(f\"Input tokens: {response.usage.prompt_tokens}\")\n",
"print(f\"Output tokens: {response.usage.completion_tokens}\")\n",
"print(f\"Total tokens: {response.usage.total_tokens}\")\n",
"print(f\"Total cost: {response._hidden_params[\"response_cost\"]*100:.4f} cents\")"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "11e37e43",
"metadata": {},
"outputs": [],
"source": [
"question[0][\"content\"] += \"\\n\\nFor context, here is the entire text of Hamlet:\\n\\n\"+hamlet"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "37afb28b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"When Laertes asks \"Where is my father?\", the reply is:\n",
"\n",
"**\"Dead.\"**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = completion(model=\"gemini/gemini-2.5-flash-lite\", messages=question)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "d84edecf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Input tokens: 53208\n",
"Output tokens: 20\n",
"Cached tokens: None\n",
"Total cost: 0.5329 cents\n"
]
}
],
"source": [
"print(f\"Input tokens: {response.usage.prompt_tokens}\")\n",
"print(f\"Output tokens: {response.usage.completion_tokens}\")\n",
"print(f\"Cached tokens: {response.usage.prompt_tokens_details.cached_tokens}\")\n",
"print(f\"Total cost: {response._hidden_params[\"response_cost\"]*100:.4f} cents\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "515d1a94",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"When Laertes asks \"Where is my father?\", the reply is given by the **King**:\n",
"\n",
"**\"Dead.\"**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"response = completion(model=\"gemini/gemini-2.5-flash-lite\", messages=question)\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "eb5dd403",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Input tokens: 53208\n",
"Output tokens: 25\n",
"Cached tokens: None\n",
"Total cost: 0.5331 cents\n"
]
}
],
"source": [
"print(f\"Input tokens: {response.usage.prompt_tokens}\")\n",
"print(f\"Output tokens: {response.usage.completion_tokens}\")\n",
"print(f\"Cached tokens: {response.usage.prompt_tokens_details.cached_tokens}\")\n",
"print(f\"Total cost: {response._hidden_params[\"response_cost\"]*100:.4f} cents\")"
]
},
{
"cell_type": "markdown",
"id": "00f5a3b7",
"metadata": {},
"source": [
"## Prompt Caching with OpenAI\n",
"\n",
"For OpenAI:\n",
"\n",
"https://platform.openai.com/docs/guides/prompt-caching\n",
"\n",
"> Cache hits are only possible for exact prefix matches within a prompt. To realize caching benefits, place static content like instructions and examples at the beginning of your prompt, and put variable content, such as user-specific information, at the end. This also applies to images and tools, which must be identical between requests.\n",
"\n",
"\n",
"Cached input is 4X cheaper\n",
"\n",
"https://openai.com/api/pricing/"
]
},
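{
"cell_type": "markdown",
"id": "openai-caching-sketch-note",
"metadata": {},
"source": [
"The next cell is a small optional sketch added for illustration (it isn't part of the original lab): it sends the same long Hamlet prompt twice through the `openai` client from earlier and prints `usage.prompt_tokens_details.cached_tokens`, which should become non-zero once the shared prefix (1024+ tokens) has been cached. The model choice is just an assumption - any OpenAI chat model will do."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "openai-caching-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: observe OpenAI's automatic prompt caching on a repeated long prompt\n",
"# Assumes `openai` and the Hamlet-augmented `question` from the cells above\n",
"\n",
"for attempt in range(2):\n",
"    response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=question)\n",
"    usage = response.usage\n",
"    cached = usage.prompt_tokens_details.cached_tokens if usage.prompt_tokens_details else 0\n",
"    print(f\"Attempt {attempt+1}: input={usage.prompt_tokens}, cached={cached}, output={usage.completion_tokens}\")"
]
},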
{
"cell_type": "markdown",
"id": "b98964f9",
"metadata": {},
"source": [
"## Prompt Caching with Anthropic\n",
"\n",
"https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching\n",
"\n",
"You have to tell Claude what you are caching\n",
"\n",
"You pay 25% MORE to \"prime\" the cache\n",
"\n",
"Then you pay 10X less for input tokens that are read from the cache.\n",
"\n",
"https://www.anthropic.com/pricing#api"
]
},
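{
"cell_type": "markdown",
"id": "anthropic-caching-sketch-note",
"metadata": {},
"source": [
"The next cell is a rough optional sketch added for illustration (not in the original lab) of Anthropic's explicit caching: it marks the Hamlet text with a `cache_control` block of type `ephemeral`, as described in the docs linked above, then prints the cache-related usage fields. It assumes the `hamlet` variable from earlier and an Anthropic API key; the parameter and field names are taken from the Anthropic docs and may need adjusting."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "anthropic-caching-sketch",
"metadata": {},
"outputs": [],
"source": [
"from anthropic import Anthropic\n",
"\n",
"claude_client = Anthropic()\n",
"\n",
"# Mark the large, static part of the prompt (the whole play) as cacheable\n",
"response = claude_client.messages.create(\n",
"    model=\"claude-sonnet-4-5-20250929\",\n",
"    max_tokens=100,\n",
"    system=[\n",
"        {\"type\": \"text\", \"text\": \"You answer questions about the play provided.\"},\n",
"        {\"type\": \"text\", \"text\": hamlet, \"cache_control\": {\"type\": \"ephemeral\"}},\n",
"    ],\n",
"    messages=[{\"role\": \"user\", \"content\": \"When Laertes asks 'Where is my father?' what is the reply?\"}],\n",
")\n",
"print(response.content[0].text)\n",
"print(f\"Cache write: {response.usage.cache_creation_input_tokens}, cache read: {response.usage.cache_read_input_tokens}\")"
]
},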
{
"cell_type": "markdown",
"id": "67d960dd",
"metadata": {},
"source": [
"## Gemini supports both 'implicit' and 'explicit' prompt caching\n",
"\n",
"https://ai.google.dev/gemini-api/docs/caching?lang=python"
]
},
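{
"cell_type": "markdown",
"id": "gemini-caching-sketch-note",
"metadata": {},
"source": [
"The next cell is an optional sketch added for illustration (not in the original lab) of Gemini's explicit caching with the google-genai client, loosely following the guide linked above: it creates a cache holding the Hamlet text, then answers a question against it. It assumes `hamlet` from earlier; the class and parameter names (`CreateCachedContentConfig`, `ttl`, the minimum cacheable size) come from the docs and may need adjusting for your SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "gemini-caching-sketch",
"metadata": {},
"outputs": [],
"source": [
"from google import genai\n",
"from google.genai import types\n",
"\n",
"client = genai.Client()\n",
"\n",
"# Create an explicit cache holding the large, static context (the full play)\n",
"cache = client.caches.create(\n",
"    model=\"gemini-2.5-flash\",\n",
"    config=types.CreateCachedContentConfig(\n",
"        system_instruction=\"You answer questions about the play provided.\",\n",
"        contents=[hamlet],\n",
"        ttl=\"300s\",\n",
"    ),\n",
")\n",
"\n",
"# Only the short question is billed at the full input rate; the cached play is billed at the cached rate\n",
"response = client.models.generate_content(\n",
"    model=\"gemini-2.5-flash\",\n",
"    contents=\"When Laertes asks 'Where is my father?' what is the reply?\",\n",
"    config=types.GenerateContentConfig(cached_content=cache.name),\n",
")\n",
"print(response.text)\n",
"print(response.usage_metadata)"
]
},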
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots\n",
"\n",
"You're already familiar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message here\"},\n",
"    {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message here\"},\n",
"    {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
"    {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
"    {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4.1-mini and Claude-3.5-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4.1-mini\"\n",
"claude_model = \"claude-3-5-haiku-latest\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
"    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
"    for gpt, claude in zip(gpt_messages, claude_messages):\n",
"        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
"        messages.append({\"role\": \"user\", \"content\": claude})\n",
"    response = openai.chat.completions.create(model=gpt_model, messages=messages)\n",
"    return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh, just \"Hi\"? That’s it? Come on, put in a bit more effort if you want an interesting chat!'"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
"    messages = [{\"role\": \"system\", \"content\": claude_system}]\n",
"    for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
"        messages.append({\"role\": \"user\", \"content\": gpt})\n",
"        messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
"    messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
"    response = anthropic.chat.completions.create(model=claude_model, messages=messages)\n",
"    return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"display(Markdown(f\"### GPT:\\n{gpt_messages[0]}\\n\"))\n",
"display(Markdown(f\"### Claude:\\n{claude_messages[0]}\\n\"))\n",
"\n",
"for i in range(5):\n",
"    gpt_next = call_gpt()\n",
"    display(Markdown(f\"### GPT:\\n{gpt_next}\\n\"))\n",
"    gpt_messages.append(gpt_next)\n",
"    \n",
"    claude_next = call_claude()\n",
"    display(Markdown(f\"### Claude:\\n{claude_next}\\n\"))\n",
"    claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../assets/important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Before you continue</h2>\n",
"            <span style=\"color:#900;\">\n",
"            Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"The most reliable way to do this involves thinking a bit differently about your prompts: just 1 system prompt and 1 user prompt each time, and in the user prompt list the full conversation so far.\n",
"\n",
"Something like:\n",
"\n",
"```python\n",
"system_prompt = \"\"\"\n",
"You are Alex, a chatbot who is very argumentative; you disagree with anything in the conversation and you challenge everything, in a snarky way.\n",
"You are in a conversation with Blake and Charlie.\n",
"\"\"\"\n",
"\n",
"user_prompt = f\"\"\"\n",
"You are Alex, in conversation with Blake and Charlie.\n",
"The conversation so far is as follows:\n",
"{conversation}\n",
"Now with this, respond with what you would like to say next, as Alex.\n",
"\"\"\"\n",
"```\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../assets/business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#181;\">Business relevance</h2>\n",
"            <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from openai import OpenAI\n",
"import dotenv\n",
"from IPython.display import Markdown, display\n",
"\n",
"dotenv.load_dotenv()\n",
"OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "c19d0d35",
"metadata": {},
"outputs": [],
"source": [
"messages = \"Blake: Hello Alex, how are you today?\\nCharlie: Hi Alex, I was wondering if you could help me with a creative project I'm working on.\"\n",
"\n",
"promts = {\n",
"    \"Alex\": (\n",
"        \"You are Alex, the practical problem-solver of the group. \"\n",
"        \"You focus on breaking big ideas into clear, actionable steps. \"\n",
"        \"You keep the group grounded and make sure progress is made. \"\n",
"        \"Keep responses short (1–2 sentences), but specific.\"\n",
"    ),\n",
"    \"Blake\": (\n",
"        \"You are Blake, the curious investigator. \"\n",
"        \"You ask thoughtful questions, challenge assumptions, and dig deeper into ideas. \"\n",
"        \"You make others think critically. \"\n",
"        \"Keep responses short (1–2 sentences), but insightful.\"\n",
"    ),\n",
"    \"Charlie\": (\n",
"        \"You are Charlie, the imaginative dreamer. \"\n",
"        \"You suggest wild, creative, out-of-the-box possibilities that inspire the group. \"\n",
"        \"You bring energy and fun, sometimes playful or surprising. \"\n",
"        \"Keep responses short (1–2 sentences), but imaginative.\"\n",
"    ),\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "33e5e9a9",
"metadata": {},
"outputs": [],
"source": [
"def chat_with_alex(message):\n",
"    response = openai.chat.completions.create(\n",
"        model=\"gpt-5-mini\",\n",
"        messages=[{\"role\": \"system\", \"content\": promts[\"Alex\"]},\n",
"                  {\"role\":\"user\", \"content\":message}]\n",
"    )\n",
"    reply = response.choices[0].message.content\n",
"    return reply\n",
"\n",
"def chat_with_blake(message):\n",
"    response = openai.chat.completions.create(\n",
"        model=\"gpt-4.1-mini\",\n",
"        messages=[{\"role\": \"system\", \"content\": promts[\"Blake\"]},\n",
"                  {\"role\":\"user\", \"content\":message}]\n",
"    )\n",
"    reply = response.choices[0].message.content\n",
"    return reply\n",
"\n",
"def chat_with_charlie(message):\n",
"    response = openai.chat.completions.create(\n",
"        model=\"gpt-4.1-nano\",\n",
"        messages=[{\"role\": \"system\", \"content\": promts[\"Charlie\"]},\n",
"                  {\"role\":\"user\", \"content\":message}]\n",
"    )\n",
"    reply = response.choices[0].message.content\n",
"    return reply"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9fb2c56b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"## Three-Way AI Conversation"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Initial Messages:**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"*Blake: Hello Alex, how are you today?\"+\"Charlie: Hi Alex, I was wondering if you could help me with a creative project I'm working on.*"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Alex:** Hey Blake — I’m doing well, thanks. Charlie — happy to help; what kind of creative project is it, who’s the audience, and do you want brainstorming, an outline, edits, or something else?"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Blake:** What assumptions are guiding Alex’s response, and how might reframing those change the nature of the help offered?"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Charlie:** Maybe Alex assumes you're asking for traditional help, so reframing it as \"let's craft a secret language for alien diplomats\" could inspire wild ideas and push boundaries!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Alex:** Assumptions: you want practical, feasible help for a known audience with clear goals and constraints, and you prefer structured deliverables (brainstorm/outline/edits) rather than open-ended play. \n",
"Reframing (e.g., \"secret language for alien diplomats,\" shift the audience, or drop feasibility) steers outputs toward playful/experimental ideas, role-played content, or technical/academic deliverables respectively, so pick the frame and I’ll match tone, scope, and format."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Blake:** Blake: How might Alex’s initial assumptions limit the creative scope, and what risks or benefits arise from maintaining or breaking those assumptions when collaborating on innovative projects?"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Charlie:** If Alex sticks to practical help, the project might stay grounded but miss out on magical, whimsical ideas; breaking assumptions opens the door to fantastical realms but might challenge feasibility—balance the magic and reality for the most inspiring creation!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Alex:** If I stick to those assumptions, I’ll favor feasible, audience-targeted, structured solutions that ensure steady progress and deliverables but may sideline risky, unconventional ideas and miss breakthroughs. Use a \"diverge then converge\" workflow: time-box an unconstrained ideation sprint to harvest wild concepts, then rapidly filter and prototype the top 2–3 to balance novelty with feasibility."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Blake:** Blake: How might explicitly naming and negotiating these assumptions upfront influence team dynamics and the creative process itself?"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"**Charlie:** By openly discussing assumptions, the team can dance on the edge of chaos—embracing wild ideas while setting gentle boundaries—turning collaboration into a thrilling adventure where everyone feels free to invent the impossible!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
|
||
"**Alex:** Naming and negotiating assumptions upfront creates alignment and psychological safety, speeds decision-making, and lets you explicitly choose when to explore versus execute—otherwise teams waste effort on mismatched goals or get stuck in conflict. \n",
|
||
"Practical steps: 15-minute kickoff assumption map, tag each as core/risky/optional, agree session modes (diverge vs converge) with timeboxes, assign owners for testing risky assumptions, and schedule quick recurring checks to update decisions."
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"**Blake:** Blake: What hidden power does the ritual of assumption-mapping hold beyond clarity—could it actively reshape team trust and creative energy in ways teams often overlook?"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"**Charlie:** Absolutely—assumption-mapping is like planting magic seeds; it grows trust by making the unseen visible, encouraging vulnerability and shared daring, turning the process into a collective adventure where everyone feels empowered to challenge the ordinary!"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"---"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"**Alex:** It externalizes uncertainty and normalizes risk-taking, which boosts psychological safety, concentrates creative energy on testable gaps, and turns blame into shared learning. \n",
|
||
"Practical steps: leader models one risky assumption first; run a 10–15 min anonymous assumption board + dot-vote; assign owners to rapid 48–72 hr experiments for top 2 assumptions; review learnings (not just outcomes) each week."
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"**Blake:** Blake: Could this ritual also surface unspoken power dynamics by revealing whose assumptions dominate or get challenged, and how could teams navigate that to deepen trust rather than provoke conflict?"
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
},
|
||
{
|
||
"data": {
|
||
"text/markdown": [
|
||
"**Charlie:** Absolutely! The assumption map acts like a truth-sharing oracle—highlighting who's steering the narrative—so teams can gently navigate those currents, ensuring everyone’s voice is heard and power imbalances turn into mutual storytelling for deeper trust."
|
||
],
|
||
"text/plain": [
|
||
"<IPython.core.display.Markdown object>"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"# Display initial conversation context\n",
|
||
"display(Markdown(\"## Three-Way AI Conversation\"))\n",
|
||
"display(Markdown(\"**Initial Messages:**\"))\n",
|
||
"display(Markdown(f\"*{messages}*\"))\n",
|
||
"display(Markdown(\"---\"))\n",
|
||
"\n",
|
||
"for i in range(5):\n",
|
||
" alex_reply = chat_with_alex(messages)\n",
|
||
" display(Markdown(f\"**Alex:** {alex_reply}\"))\n",
|
||
" messages += \"\\nAlex: \" + alex_reply\n",
|
||
"\n",
|
||
" blake_reply = chat_with_blake(messages)\n",
|
||
" display(Markdown(f\"**Blake:** {blake_reply}\"))\n",
|
||
" messages += \"\\nBlake: \" + blake_reply\n",
|
||
"\n",
|
||
" charlie_reply = chat_with_charlie(messages)\n",
|
||
" display(Markdown(f\"**Charlie:** {charlie_reply}\"))\n",
|
||
" messages += \"\\nCharlie: \" + charlie_reply\n",
|
||
"\n",
|
||
" # Add separator between rounds\n",
|
||
" if i < 4:\n",
|
||
" display(Markdown(\"---\"))"
|
||
]
|
||
}
|
||
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}