week1: add Fernando notebooks (day2 + exercise), outputs cleared

This commit is contained in:
nandoclal
2025-10-23 10:21:53 -06:00
parent 3e919ce9c7
commit a3e8be560f
2 changed files with 669 additions and 0 deletions


@@ -0,0 +1,494 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to the Day 2 Lab!\n"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../assets/resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Just before we get started --</h2>\n",
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "79ffe36f",
"metadata": {},
"source": [
"## First - let's talk about the Chat Completions API\n",
"\n",
"1. The simplest way to call an LLM\n",
"2. It's called Chat Completions because it's saying: \"here is a conversation, please predict what should come next\"\n",
"3. The Chat Completions API was invented by OpenAI, but it's so popular that everybody uses it!\n",
"\n",
"### We will start by calling OpenAI again - but don't worry non-OpenAI people, your time is coming!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e38f17a0",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "markdown",
"id": "97846274",
"metadata": {},
"source": [
"## Do you know what an Endpoint is?\n",
"\n",
"If not, please review the Technical Foundations guide in the guides folder\n",
"\n",
"And, here is an endpoint that might interest you..."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5af5c188",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"headers = {\"Authorization\": f\"Bearer {api_key}\", \"Content-Type\": \"application/json\"}\n",
"\n",
"payload = {\n",
" \"model\": \"gpt-5-nano\",\n",
" \"messages\": [\n",
" {\"role\": \"user\", \"content\": \"Tell me a fun fact\"}]\n",
"}\n",
"\n",
"payload"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d0ab242",
"metadata": {},
"outputs": [],
"source": [
"response = requests.post(\n",
" \"https://api.openai.com/v1/chat/completions\",\n",
" headers=headers,\n",
" json=payload\n",
")\n",
"\n",
"response.json()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb11a9f6",
"metadata": {},
"outputs": [],
"source": [
"response.json()[\"choices\"][0][\"message\"][\"content\"]"
]
},
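{
"cell_type": "markdown",
"id": "3c91d2aa",
"metadata": {},
"source": [
"One practical note: if the request fails (bad key, wrong model name), the response JSON won't have a `choices` key. A minimal sketch that fails loudly instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d02e3bb",
"metadata": {},
"outputs": [],
"source": [
"# Raise on HTTP errors before parsing, so a failure is a clear error rather than a KeyError\n",
"response.raise_for_status()\n",
"print(response.json()[\"choices\"][0][\"message\"][\"content\"])"
]
},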
{
"cell_type": "markdown",
"id": "cea3026a",
"metadata": {},
"source": [
"# What is the openai package?\n",
"\n",
"It's known as a Python Client Library.\n",
"\n",
"It's nothing more than a wrapper around making this exact call to the http endpoint.\n",
"\n",
"It just allows you to work with nice Python code instead of messing around with janky json objects.\n",
"\n",
"But that's it. It's open-source and lightweight. Some people think it contains OpenAI model code - it doesn't!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "490fdf09",
"metadata": {},
"outputs": [],
"source": [
"# Create OpenAI client\n",
"\n",
"from openai import OpenAI\n",
"openai = OpenAI()\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-5-nano\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content\n",
"\n"
]
},
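{
"cell_type": "markdown",
"id": "5e13f4cc",
"metadata": {},
"source": [
"The messages list is just the conversation so far. As a quick sketch, you can steer the model by putting a system message in front (the wording here is only an example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f24a5dd",
"metadata": {},
"outputs": [],
"source": [
"# Same call as above, with an example system message prepended to the conversation\n",
"response = openai.chat.completions.create(\n",
"    model=\"gpt-5-nano\",\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a witty assistant who answers in one sentence.\"},\n",
"        {\"role\": \"user\", \"content\": \"Tell me a fun fact\"}\n",
"    ]\n",
")\n",
"\n",
"response.choices[0].message.content"
]
},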
{
"cell_type": "markdown",
"id": "c7739cda",
"metadata": {},
"source": [
"## And then this great thing happened:\n",
"\n",
"OpenAI's Chat Completions API was so popular, that the other model providers created endpoints that are identical.\n",
"\n",
"They are known as the \"OpenAI Compatible Endpoints\".\n",
"\n",
"For example, google made one here: https://generativelanguage.googleapis.com/v1beta/openai/\n",
"\n",
"And OpenAI decided to be kind: they said, hey, you can just use the same client library that we made for GPT. We'll allow you to specify a different endpoint URL and a different key, to use another provider.\n",
"\n",
"So you can use:\n",
"\n",
"```python\n",
"gemini = OpenAI(base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\", api_key=\"AIz....\")\n",
"gemini.chat.completions.create(...)\n",
"```\n",
"\n",
"And to be clear - even though OpenAI is in the code, we're only using this lightweight python client library to call the endpoint - there's no OpenAI model involved here.\n",
"\n",
"If you're confused, please review Guide 9 in the Guides folder!\n",
"\n",
"And now let's try it!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f74293bc",
"metadata": {},
"outputs": [],
"source": [
"\n",
"GEMINI_BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"\n",
"google_api_key = os.getenv(\"GOOGLE_API_KEY\")\n",
"\n",
"if not google_api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not google_api_key.startswith(\"AIz\"):\n",
" print(\"An API key was found, but it doesn't start AIz\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8fc5520d",
"metadata": {},
"outputs": [],
"source": [
"import google.generativeai as genai\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"load_dotenv()\n",
"genai.configure(api_key=os.getenv(\"GOOGLE_API_KEY\"))\n",
"\n",
"# Lista de modelos disponibles\n",
"for model in genai.list_models():\n",
" print(model.name, \"-\", model.supported_generation_methods)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d060f484",
"metadata": {},
"outputs": [],
"source": [
"import google.generativeai as genai\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"load_dotenv()\n",
"genai.configure(api_key=os.getenv(\"GOOGLE_API_KEY\"))\n",
"\n",
"model = genai.GenerativeModel(\"models/gemini-2.5-pro\") # Usa el modelo que viste en la lista, ejemplo \"gemini-1.5-pro\" o \"gemini-1.5-flash\"\n",
"response = model.generate_content(\"Tell me a fun fact\")\n",
"\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gemini = OpenAI(base_url=GEMINI_BASE_URL, api_key=google_api_key)\n",
"\n",
"response = gemini.chat.completions.create(model=\"models/gemini-2.5-pro\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5b069be",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "65272432",
"metadata": {},
"source": [
"## And Ollama also gives an OpenAI compatible endpoint\n",
"\n",
"...and it's on your local machine!\n",
"\n",
"If the next cell doesn't print \"Ollama is running\" then please open a terminal and run `ollama serve`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f06280ad",
"metadata": {},
"outputs": [],
"source": [
"requests.get(\"http://localhost:11434\").content"
]
},
{
"cell_type": "markdown",
"id": "c6ef3807",
"metadata": {},
"source": [
"### Download llama3.2 from meta\n",
"\n",
"Change this to llama3.2:1b if your computer is smaller.\n",
"\n",
"Don't use llama3.3 or llama4! They are too big for your computer.."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e633481d",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce240975",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"response = requests.get(\"http://localhost:11434/v1/models\")\n",
"print(response.json())\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9419762",
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"\n",
"OLLAMA_BASE_URL = \"http://localhost:11434/v1\"\n",
"\n",
"ollama = OpenAI(base_url=OLLAMA_BASE_URL, api_key='ollama')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e2456cdf",
"metadata": {},
"outputs": [],
"source": [
"# Get a fun fact\n",
"\n",
"response = ollama.chat.completions.create(model=\"llama3.2\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"\n",
"response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d7cebd7",
"metadata": {},
"outputs": [],
"source": [
"# Now let's try deepseek-r1:1.5b - this is DeepSeek \"distilled\" into Qwen from Alibaba Cloud\n",
"\n",
"!ollama pull deepseek-r1:1.5b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25002f25",
"metadata": {},
"outputs": [],
"source": [
"#response = ollama.chat.completions.create(model=\"deepseek-r1:1.5b\", messages=[{\"role\": \"user\", \"content\": \"Tell me a fun fact\"}])\n",
"#response.choices[0].message.content\n",
"\n",
"from ollama import chat # pip install ollama\n",
"\n",
"resp = chat(\n",
" model='deepseek-r1:1.5b',\n",
" messages=[{'role': 'user', 'content': 'Tell me a fun fact'}],\n",
")\n",
"\n",
"print(resp['message']['content'])\n",
"# o\n",
"print(resp.message.content)\n"
]
},
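{
"cell_type": "markdown",
"id": "7a35b6ee",
"metadata": {},
"source": [
"Reasoning models like deepseek-r1 typically wrap their chain of thought in `<think>...</think>` tags before the final answer. A minimal sketch to strip that out (assuming the tags appear as shown):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b46c7ff",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"# Drop the <think>...</think> block, keeping only the final answer\n",
"answer = re.sub(r\"<think>.*?</think>\", \"\", resp.message.content, flags=re.DOTALL).strip()\n",
"print(answer)"
]
},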
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less power than Frontier Model\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
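{
"cell_type": "markdown",
"id": "9c57d800",
"metadata": {},
"source": [
"The next cell imports `fetch_website_contents` from a local `scraper` module. If that module isn't in your folder, here is a minimal stand-in, a sketch using requests + BeautifulSoup (the real helper may behave differently):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d68e911",
"metadata": {},
"outputs": [],
"source": [
"# A minimal stand-in for scraper.fetch_website_contents, assumed to return page text\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"def fetch_website_contents(url):\n",
"    response = requests.get(url, headers={\"User-Agent\": \"Mozilla/5.0\"})\n",
"    soup = BeautifulSoup(response.content, \"html.parser\")\n",
"    for tag in soup([\"script\", \"style\", \"img\", \"input\"]):\n",
"        tag.decompose()\n",
"    return soup.get_text(separator=\"\\n\", strip=True)"
]
},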
{
"cell_type": "code",
"execution_count": null,
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from scraper import fetch_website_contents\n",
"from IPython.display import Markdown, display\n",
"from ollama import Client \n",
"\n",
"# Cliente Ollama local\n",
"ollama = Client()\n",
"\n",
"system_prompt = \"\"\"\n",
"You are a helpful assistant that analyzes the contents of a website,\n",
"and provides a short, snarky, humorous summary, ignoring text that might be navigation related.\n",
"Respond in markdown. Do not wrap the markdown in a code block - respond just with the markdown.\n",
"\"\"\"\n",
"\n",
"user_prompt_prefix = \"\"\"\n",
"Here are the contents of a website.\n",
"Provide a short summary of this website.\n",
"If it includes news or announcements, then summarize these too.\n",
"\"\"\"\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_prefix + website}\n",
" ]\n",
"\n",
"def summarize(url):\n",
" website = fetch_website_contents(url)\n",
" response = ollama.chat(\n",
" model='llama3.2',\n",
" messages=messages_for(website)\n",
" )\n",
" return response['message']['content']\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))\n",
"\n",
"# Ejecuta el resumen\n",
"display_summary(\"https://www.reforma.com\")\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,175 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"system_prompt = \"\"\"\n",
"You are a technical expert of AI and LLMs.\n",
"\"\"\"\n",
"\n",
"user_prompt_prefix = \"\"\"\n",
"Provide deep explanations of the provided text.\n",
"\"\"\"\n",
"\n",
"user_prompt = \"\"\"\n",
"Explain the provided text.\n",
"\"\"\"\n",
"client = OpenAI()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Ollama does have an OpenAI compatible endpoint, but Gemini doesn't?\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"def messages_for(question):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_prefix + question}\n",
" ]\n",
"\n",
"def run_model_streaming(model_name, question):\n",
" stream = client.chat.completions.create(\n",
" model=model_name,\n",
" messages=messages_for(question),\n",
" stream=True\n",
" )\n",
" for chunk in stream:\n",
" content = chunk.choices[0].delta.content\n",
" if content:\n",
" print(content, end=\"\", flush=True)\n",
"\n",
"run_model_streaming(MODEL_GPT, question)"
]
},
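{
"cell_type": "markdown",
"id": "1e79fa22",
"metadata": {},
"source": [
"An optional variation: a sketch that renders the streamed reply as live-updating Markdown, using IPython's `display` and `update_display`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f8a0b33",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Markdown, display, update_display\n",
"\n",
"def stream_markdown(model_name, question):\n",
"    stream = client.chat.completions.create(\n",
"        model=model_name,\n",
"        messages=messages_for(question),\n",
"        stream=True\n",
"    )\n",
"    reply = \"\"\n",
"    handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        content = chunk.choices[0].delta.content\n",
"        if content:\n",
"            reply += content\n",
"            update_display(Markdown(reply), display_id=handle.display_id)\n",
"\n",
"stream_markdown(MODEL_GPT, question)"
]
},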
{
"cell_type": "code",
"execution_count": null,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [],
"source": [
"# Get Llama 3.2 to answer\n",
"# imports\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"\n",
"# set up environment\n",
"client = OpenAI(\n",
" base_url=os.getenv(\"OPENAI_BASE_URL\", \"http://localhost:11434/v1\"),\n",
" api_key=os.getenv(\"OPENAI_API_KEY\", \"ollama\")\n",
")\n",
"\n",
"system_prompt = \"\"\"\n",
"You are a technical expert of AI and LLMs.\n",
"\"\"\"\n",
"\n",
"user_prompt_prefix = \"\"\"\n",
"Provide deep explanations of the provided text.\n",
"\"\"\"\n",
"\n",
"# question\n",
"question = \"\"\"\n",
"Ollama does have an OpenAI compatible endpoint, but Gemini doesn't?\n",
"\"\"\"\n",
"\n",
"# message\n",
"def messages_for(question):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_prefix + question}\n",
" ]\n",
"\n",
"# response\n",
"def run_model(model_name, question):\n",
" response = client.chat.completions.create(\n",
" model=model_name,\n",
" messages=messages_for(question)\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"# run and print result\n",
"print(run_model(MODEL_LLAMA, question))\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}