{
"cells": [
{
"cell_type": "markdown",
"id": "d9e61417",
"metadata": {},
"source": [
"# Day 4\n",
"\n",
"## Tokenizing with code"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7dc1c1d9",
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"encoding = tiktoken.encoding_for_model(\"gpt-4.1-mini\")\n",
"\n",
"tokens = encoding.encode(\"Hi my name is Ed and I like banoffee pie\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7632966c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[12194, 922, 1308, 382, 6117, 326, 357, 1299, 9171, 26458, 5148]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokens"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cce0c188",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"12194 = Hi\n",
"922 = my\n",
"1308 = name\n",
"382 = is\n",
"6117 = Ed\n",
"326 = and\n",
"357 = I\n",
"1299 = like\n",
"9171 = ban\n",
"26458 = offee\n",
"5148 = pie\n"
]
}
],
"source": [
"for token_id in tokens:\n",
" token_text = encoding.decode([token_id])\n",
" print(f\"{token_id} = {token_text}\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "98e3bbd2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' and'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encoding.decode([326])"
]
},
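{
"cell_type": "markdown",
"id": "f3d2a1b0",
"metadata": {},
"source": [
"Notice the leading space in `' and'` - the tokenizer folds the space before a word into the token itself.\n",
"\n",
"One practical use of the tokenizer is counting tokens before you send a prompt, since input cost scales with token count. Here's a minimal sketch reusing the `encoding` object from above - the per-token price is a made-up number purely for illustration, not a real rate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4e5f607",
"metadata": {},
"outputs": [],
"source": [
"prompt = \"Hi my name is Ed and I like banoffee pie\"\n",
"\n",
"# encode() returns a list of token ids, so its length is the token count\n",
"token_count = len(encoding.encode(prompt))\n",
"\n",
"# Hypothetical price, purely for illustration - check your provider's pricing page\n",
"price_per_input_token = 0.40 / 1_000_000\n",
"print(f\"{token_count} tokens = about ${token_count * price_per_input_token:.8f} of input\")"
]
},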
{
"cell_type": "markdown",
"id": "538efe61",
"metadata": {},
"source": [
"# And another topic!\n",
"\n",
"### The Illusion of \"memory\"\n",
"\n",
"Many of you will know this already. But for those that don't -- this might be an \"AHA\" moment!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83a4b3eb",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")"
]
},
{
"cell_type": "markdown",
"id": "b618859b",
"metadata": {},
"source": [
"### You should be very comfortable with what the next cell is doing!\n",
"\n",
"_I'm creating a new instance of the OpenAI Python Client library, a lightweight wrapper around making HTTP calls to an endpoint for calling the GPT LLM, or other LLM providers_"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b959be3b",
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "markdown",
"id": "aa889e80",
"metadata": {},
"source": [
"### A message to OpenAI is a list of dicts"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97298fea",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"Hi! I'm Ed!\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3475a36d",
"metadata": {},
"outputs": [],
"source": [
"response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=messages)\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "markdown",
"id": "a5f45ed8",
"metadata": {},
"source": [
"### OK let's now ask a follow-up question"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6bce2208",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What's my name?\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "404462f5",
"metadata": {},
"outputs": [],
"source": [
"response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=messages)\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "markdown",
"id": "098237ef",
"metadata": {},
"source": [
"### Wait, wha??\n",
"\n",
"We just told you!\n",
"\n",
"What's going on??\n",
"\n",
"Here's the thing: every call to an LLM is completely STATELESS. It's a totally new call, every single time. As AI engineers, it's OUR JOB to devise techniques to give the impression that the LLM has a \"memory\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b6d43f92",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"Hi! I'm Ed!\"},\n",
" {\"role\": \"assistant\", \"content\": \"Hi Ed! How can I assist you today?\"},\n",
" {\"role\": \"user\", \"content\": \"What's my name?\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7ac742c",
"metadata": {},
"outputs": [],
"source": [
"response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=messages)\n",
"response.choices[0].message.content"
]
},
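{
"cell_type": "markdown",
"id": "9a8b7c6d",
"metadata": {},
"source": [
"And now it knows - because we replayed the whole conversation.\n",
"\n",
"Here's one way to wrap that pattern in a tiny helper of my own: keep a running `history` list, append every user message and assistant reply, and send the entire thing on each call. This is just a sketch of the pattern, not part of any library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e2f3a4b",
"metadata": {},
"outputs": [],
"source": [
"# A running history that we replay on every call - this is the entire \"memory\" trick\n",
"history = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"}]\n",
"\n",
"def chat(user_message):\n",
"    # Add the new user turn, then send the WHOLE conversation so far\n",
"    history.append({\"role\": \"user\", \"content\": user_message})\n",
"    response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=history)\n",
"    reply = response.choices[0].message.content\n",
"    # Remember the assistant's reply so the next call sees it too\n",
"    history.append({\"role\": \"assistant\", \"content\": reply})\n",
"    return reply\n",
"\n",
"print(chat(\"Hi! I'm Ed!\"))\n",
"print(chat(\"What's my name?\"))"
]
},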
{
"cell_type": "markdown",
"id": "96c49557",
"metadata": {},
"source": [
"## To recap\n",
"\n",
"With apologies if this is obvious to you - but it's still good to reinforce:\n",
"\n",
"1. Every call to an LLM is stateless\n",
"2. We pass in the entire conversation so far in the input prompt, every time\n",
"3. This gives the illusion that the LLM has memory - it apparently keeps the context of the conversation\n",
"4. But this is a trick; it's a by-product of providing the entire conversation, every time\n",
"5. An LLM just predicts the most likely next tokens in the sequence; if that sequence contains \"My name is Ed\" and later \"What's my name?\" then it will predict.. Ed!\n",
"\n",
"The ChatGPT product uses exactly this trick - every time you send a message, it's the entire conversation that gets passed in.\n",
"\n",
"\"Does that mean we have to pay extra each time for all the conversation so far\"\n",
"\n",
"For sure it does. And that's what we WANT. We want the LLM to predict the next tokens in the sequence, looking back on the entire conversation. We want that compute to happen, so we need to pay the electricity bill for it!\n",
"\n"
]
}
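,
{
"cell_type": "markdown",
"id": "5d6e7f80",
"metadata": {},
"source": [
"To make that cost point concrete, we can count the tokens we're re-sending, using tiktoken from earlier. The sum below only counts the message contents - the real chat format wraps each message in a few extra framing tokens - so treat it as a rough estimate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a1b2c3d",
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"encoding = tiktoken.encoding_for_model(\"gpt-4.1-mini\")\n",
"\n",
"# Rough count of the tokens in each message's content - the actual chat format\n",
"# adds a few framing tokens per message, so this slightly undercounts\n",
"total = sum(len(encoding.encode(message[\"content\"])) for message in messages)\n",
"print(f\"The next call would re-send roughly {total} tokens of conversation\")"
]
}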
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}