Merge pull request #417 from OpalDecisionSciences/week1-pr-community-contributions-jeannine-jordan
PR to Add Week 1 Community Contributions
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# YOUR FIRST LAB\n",
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n",
"\n",
"## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you're new to the Command Line\n",
"\n",
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode, Cursor or PyCharm, they all work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## For foundational technical knowledge (eg Git, APIs, debugging) \n",
"\n",
"If you're relatively new to programming -- I've got your back! While it's ideal to have some programming experience for this course, there's only one mandatory prerequisite: plenty of patience. 😁 I've put together a set of self-study guides that cover Git and GitHub, APIs and endpoints, beginner python and more.\n",
"\n",
"This covers Git and GitHub; what they are, the difference, and how to use them: \n",
"https://github.com/ed-donner/agents/blob/main/guides/03_git_and_github.ipynb\n",
"\n",
"This covers technical foundations: \n",
"ChatGPT vs API; taking screenshots; Environment Variables; Networking basics; APIs and endpoints: \n",
"https://github.com/ed-donner/agents/blob/main/guides/04_technical_foundations.ipynb\n",
"\n",
"This covers Python for beginners, and making sure that a `NameError` never trips you up: \n",
"https://github.com/ed-donner/agents/blob/main/guides/06_python_foundations.ipynb\n",
"\n",
"This covers the essential techniques for figuring out errors: \n",
"https://github.com/ed-donner/agents/blob/main/guides/08_debugging.ipynb\n",
"\n",
"And you'll find other useful guides in the same folder in GitHub. Some information applies to my other Udemy course (eg Async Python) but most of it is very relevant for LLM engineering.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress. Ultimately we will fine-tune our own LLM to compete with OpenAI!\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Please read - important note</h2>\n",
"            <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a GitHub account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
"        </td>\n",
"    </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#f71;\">This code is a live resource - keep an eye out for my emails</h2>\n",
"            <span style=\"color:#f71;\">I push updates to the code regularly. As people ask questions, I add more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but I've also added better explanations and new models like DeepSeek. Consider this like an interactive book.<br/><br/>\n",
"            I try to send emails regularly with important updates related to the course. You can find this in the 'Announcements' section of Udemy in the left sidebar. You can also choose to receive my emails via your Notification Settings in Udemy. I'm respectful of your inbox and always try to add value with my emails!\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
"            <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
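To see what that `yield from` one-liner actually does, here is a minimal sketch with made-up book data. The `books` list and the `unique_authors` wrapper are illustrative, not part of the course code:

```python
# Illustrative data - each book may or may not have an "author" key.
books = [
    {"title": "A Wizard of Earthsea", "author": "Ursula K. Le Guin"},
    {"title": "Anonymous Pamphlet"},
    {"title": "The Dispossessed", "author": "Ursula K. Le Guin"},
    {"title": "Exhalation", "author": "Ted Chiang"},
]

def unique_authors(books):
    # The set comprehension deduplicates authors and the `if` clause
    # drops books with a missing (or falsy) author; `yield from` then
    # turns this function into a generator over that set.
    yield from {book.get("author") for book in books if book.get("author")}

print(sorted(unique_authors(books)))  # → ['Ted Chiang', 'Ursula K. Le Guin']
```

If you can predict that output, you can safely skip the Intermediate Python notebook.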
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.options import Options\n",
"from selenium.webdriver.chrome.service import Service\n",
"from webdriver_manager.chrome import ChromeDriverManager\n",
"import time\n",
"import random\n",
"from urllib import robotparser\n",
"from urllib.parse import urlparse\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI (or Ollama)\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI. \n",
"\n",
"If you'd like to use free Ollama instead, please see the README section \"Free Alternative to Paid APIs\", and if you're not sure how to do this, there's a full solution in the solutions folder (day1_with_ollama.ipynb).\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
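For the Ollama route, here is a hedged, stdlib-only sketch of what the alternative looks like. Ollama serves an OpenAI-compatible chat endpoint at `http://localhost:11434/v1/chat/completions` by default; the helper names (`build_payload`, `chat_ollama`) and the model name `llama3.2` are my assumptions for illustration, and the call will only work if Ollama is running locally with that model pulled:

```python
# Sketch only: requires a local Ollama server for chat_ollama to succeed.
import json
import urllib.request

def build_payload(model, system_prompt, user_prompt):
    # Same messages structure that the OpenAI API expects
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def chat_ollama(payload, url="http://localhost:11434/v1/chat/completions"):
    # POST the JSON payload and pull the assistant reply out of the response
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The full, tested version is in the solutions folder (day1_with_ollama.ipynb).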
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
"    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
"    print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
"    print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
"\t\"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
"    def __init__(self, url):\n",
"        \"\"\"\n",
"        Create this Website object from the given url using the BeautifulSoup library\n",
"        \"\"\"\n",
"        self.url = url\n",
"        response = requests.get(url, headers=headers)\n",
"        soup = BeautifulSoup(response.content, 'html.parser')\n",
"        self.title = soup.title.string if soup.title else \"No title found\"\n",
"        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
"            irrelevant.decompose()\n",
"        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT-4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
"    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
"    user_prompt += website.text\n",
"    return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```python\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
"    {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"```\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
"    {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
"    {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
"    return [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
"    ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
"    website = Website(url)\n",
"    response = openai.chat.completions.create(\n",
"        model = \"gpt-4o-mini\",\n",
"        messages = messages_for(website)\n",
"    )\n",
"    return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
"    summary = summarize(url)\n",
"    display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also, websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
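One way to handle those limits gracefully is to probe a URL before scraping, so a 403 from bot protection (or a non-HTML response) fails fast instead of feeding an empty page to the summarizer. This is a hedged, stdlib-only sketch; the helper names `probe` and `scrapable` are illustrative, not part of the course code:

```python
# Sketch only: probe() makes a real network request when called.
import urllib.request
import urllib.error

def probe(url, timeout=10):
    """Return (status_code, content_type) without raising on HTTP errors."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.headers.get("Content-Type", "")
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("Content-Type", "") or ""

def scrapable(status, content_type):
    # Heuristic: only 200 responses that claim to be HTML are worth parsing
    return status == 200 and "text/html" in content_type
```

You could wrap `display_summary` to call `probe` first and print a friendly message for a 403 instead of a traceback.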
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"display_summary(\"https://cnn.com\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"display_summary(\"https://anthropic.com\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<table style=\"margin: 0; text-align: left;\">\n",
|
||||
" <tr>\n",
|
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
|
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
|
||||
" </td>\n",
|
||||
" <td>\n",
|
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n",
|
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
|
||||
"\n",
|
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
|
||||
" </td>\n",
|
||||
" </tr>\n",
|
||||
"</table>\n",
|
||||
"\n",
|
||||
"<table style=\"margin: 0; text-align: left;\">\n",
|
||||
" <tr>\n",
|
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
|
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
|
||||
" </td>\n",
|
||||
" <td>\n",
|
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
|
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
|
||||
" </td>\n",
|
||||
" </tr>\n",
|
||||
"</table>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Step 1: Create your prompts\n",
|
||||
"\n",
|
||||
"system_prompt = \"\"\"\n",
|
||||
"You are an assistant that creates short clear concise and relevant email \n",
|
||||
"subject lines based on the content of the email\n",
|
||||
"\"\"\"\n",
|
||||
"user_prompt = \"\"\"\n",
|
||||
"Hi team,\n",
|
||||
"\n",
|
||||
"Just a quick update on our Q2 progress. We’ve exceeded our sales goals by 15% and customer satisfaction scores are up 10 points from last quarter. Kudos to everyone involved, especially the sales and support teams. Let’s keep this momentum going as we head into Q3.\n",
|
||||
"\n",
|
||||
"Best,\n",
|
||||
"Jeannine\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"# Step 2: Make the messages list\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" {\"role\":\"system\", \"content\":system_prompt},\n",
|
||||
" {\"role\":\"user\", \"content\":f\"Email:{user_prompt}/n/nGenerate a concise subject line for this email.\"}\n",
|
||||
"] # fill this in\n",
|
||||
"\n",
|
||||
"# Step 3: Call OpenAI\n",
|
||||
"\n",
|
||||
"response = openai.chat.completions.create(\n",
|
||||
" model=\"gpt-4o-mini\",\n",
|
||||
" messages=messages,\n",
|
||||
" temperature=0.5,\n",
|
||||
" max_tokens=20\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Step 4: print the result\n",
|
||||
"\n",
|
||||
"print(\"Suggested subject line: \", response.choices[0].message.content.strip())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## An extra exercise for those who enjoy web scraping\n",
|
||||
"\n",
|
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Sharing your code\n",
|
||||
"\n",
|
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
|
||||
"\n",
|
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
|
||||
"\n",
|
||||
"Here are good instructions courtesy of an AI friend: \n",
|
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# A modified class to fetch and parse fully rendered pages: with ethically reduced CAPTCHA events\n",
|
||||
"class Website:\n",
|
||||
"\n",
|
||||
" def __init__(self, url):\n",
|
||||
" \"\"\"\n",
|
||||
" Create this Website object from the given url using Selenium and BeautifulSoup.\n",
|
||||
" Render JavaScript content and extract text from the page.\n",
|
||||
" \"\"\"\n",
|
||||
" self.url = url\n",
|
||||
"\n",
|
||||
" if not self._is_allowed_by_robots(url):\n",
|
||||
" print(f\"Warning: robots.txt does not explicitly allow webscraping of {url}. Proceeding anyway.\")\n",
|
||||
" self.text, self.title = self._scrape_content()\n",
|
||||
"\n",
|
||||
" # Check robots.txt if scraping is allowed\n",
|
||||
" def _is_allowed_by_robots(self, url, user_agent=\"*\"):\n",
" parsed = urlparse(url)\n",
" robots_url = f\"{parsed.scheme}://{parsed.netloc}/robots.txt\"\n",
" rp = urllib.robotparser.RobotFileParser()\n",
" rp.set_url(robots_url)\n",
" try:\n",
" rp.read()\n",
" return rp.can_fetch(user_agent, url)\n",
" except Exception:\n",
" # If robots.txt is unreachable, assume permissible\n",
" return True\n",
"\n",
" def _scrape_content(self, retries=3, wait_base=5):\n",
" # List of user agents for rotation\n",
" user_agents = [\n",
" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\",\n",
" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15\",\n",
" \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0\"\n",
" ]\n",
" # Pick one user agent at random for this scrape\n",
" selected_agent = random.choice(user_agents)\n",
"\n",
" # Set up headless Chrome options\n",
" options = Options()\n",
" options.add_argument(\"--headless=new\")\n",
" options.add_argument(\"--disable-gpu\")\n",
" options.add_argument(\"--no-sandbox\")\n",
" options.add_argument(\"--disable-dev-shm-usage\")\n",
" options.add_argument(f\"user-agent={selected_agent}\")\n",
"\n",
" # Try to get past anti-bot protections, retrying with exponential backoff\n",
" for attempt in range(retries):\n",
" try:\n",
" # Start browser\n",
" service = Service(ChromeDriverManager().install())\n",
" driver = webdriver.Chrome(service=service, options=options)\n",
" driver.set_page_load_timeout(30)\n",
" driver.get(self.url)\n",
"\n",
" # Mimic human browsing behavior with a random delay, without overloading the server\n",
" time.sleep(random.uniform(6, 12))\n",
"\n",
" # Get the page source after rendering\n",
" soup = BeautifulSoup(driver.page_source, 'html.parser')\n",
" driver.quit()\n",
"\n",
" for tag in soup([\"script\", \"style\", \"img\", \"input\"]):\n",
" tag.decompose()\n",
"\n",
" title = soup.title.string.strip() if soup.title and soup.title.string else \"No title found\"\n",
" body = soup.body\n",
" text = body.get_text(separator=\"\\n\", strip=True) if body else \"No content found.\"\n",
"\n",
" return text, title\n",
"\n",
" except Exception:\n",
" # Exponential backoff to avoid retry spamming on failure\n",
" time.sleep(wait_base * (2 ** attempt))\n",
" continue\n",
"\n",
" raise Exception(\"Failed to retrieve content despite retries.\")\n",
"\n",
"\n",
"rendered_page = Website(\"https://openai.com\")\n",
"print(\"\\nTitle: \", rendered_page.title)\n",
"print(\"\\nText: \", rendered_page.text, \"\\n\")\n",
"#print(\"\\nUser prompt: \", user_prompt_for(rendered_page), \"\\n\")\n",
"#messages_for(rendered_page)\n",
"#summarize(\"https://openai.com\")\n",
"\n",
"display_summary(\"https://openai.com\")\n"
]
},
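{
"cell_type": "code",
"execution_count": null,
"id": "robots-txt-demo-added",
"metadata": {},
"outputs": [],
"source": [
"# A standalone sketch of the robots.txt check used in the class above, so you can\n",
"# try the urllib.robotparser logic on its own. Assumption: python.org is only an\n",
"# illustrative target that serves a robots.txt -- any site will do.\n",
"import urllib.robotparser\n",
"\n",
"rp = urllib.robotparser.RobotFileParser()\n",
"rp.set_url(\"https://www.python.org/robots.txt\")\n",
"rp.read()\n",
"print(rp.can_fetch(\"*\", \"https://www.python.org/\"))\n"
]
},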
{
"cell_type": "code",
"execution_count": null,
"id": "781119a4-844c-4e03-84bd-8b8f2200d86c",
"metadata": {},
"outputs": [],
"source": [
"# With async for multiple-page scraping: using Selenium in Jupyter Lab\n",
"import nest_asyncio # Required inside a Jupyter notebook\n",
"import asyncio\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"\n",
"# Async-safe wrapper for a single URL: needed because Selenium is synchronous\n",
"def scrape_sync(url):\n",
" try:\n",
" page = Website(url)\n",
" return {\n",
" \"url\": url,\n",
" \"title\": page.title,\n",
" \"text\": page.text,\n",
" \"summary\": display_summary(url)\n",
" }\n",
" except Exception as e:\n",
" return {\n",
" \"url\": url,\n",
" \"error\": str(e)\n",
" }\n",
"\n",
"\n",
"# Async runner for multiple URLs\n",
"async def scrape_multiple_async(urls, max_workers=4):\n",
" loop = asyncio.get_running_loop()\n",
" with ThreadPoolExecutor(max_workers=max_workers) as executor:\n",
" futures = [\n",
" loop.run_in_executor(executor, scrape_sync, url)\n",
" for url in urls\n",
" ]\n",
" return await asyncio.gather(*futures)\n",
"\n",
"\n",
"# Example async usage\n",
"if __name__ == \"__main__\":\n",
" urls_to_scrape = [\n",
" \"https://www.investopedia.com/articles/active-trading/111115/why-all-worlds-top-10-companies-are-american.asp\",\n",
" \"https://fortune.com/ranking/global500/\",\n",
" \"http://en.wikipedia.org/wiki/List_of_largest_corporate_profits_and_losses\",\n",
" ]\n",
"\n",
" async def run():\n",
" results = await scrape_multiple_async(urls_to_scrape)\n",
" for res in results:\n",
" print(f\"\\nURL: {res.get('url')}\")\n",
" print(f\"Title: {res.get('title', 'N/A')}\")\n",
" print(f\"Preview:\\n{res.get('text', res.get('error', 'No content'))}\\n\")\n",
"\n",
" # Jupyter already has a running event loop: asyncio.run() cannot be called from a running event loop\n",
" nest_asyncio.apply()\n",
" await run()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32fa56f2-f78e-421f-b35e-77fb9608d652",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,626 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa",
"metadata": {},
"source": [
"# A full business solution\n",
"\n",
"## Now we will take our project from Day 1 to the next level\n",
"\n",
"### BUSINESS CHALLENGE:\n",
"\n",
"Create a product that builds a Brochure for a company, to be used for prospective clients, investors and potential recruits.\n",
"\n",
"We will be provided with a company name and its primary website.\n",
"\n",
"See the end of this notebook for examples of real-world business applications.\n",
"\n",
"And remember: I'm always available if you have problems or ideas! Please do reach out."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5b08506-dc8b-4443-9201-5f1848161363",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.options import Options\n",
"from selenium.webdriver.chrome.service import Service\n",
"from selenium.webdriver.common.by import By\n",
"from selenium.webdriver.support.ui import WebDriverWait\n",
"from selenium.webdriver.support import expected_conditions as EC\n",
"from webdriver_manager.chrome import ChromeDriverManager\n",
"from urllib.parse import urlparse, urljoin\n",
"import time\n",
"import random\n",
"import concurrent.futures"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
"metadata": {},
"outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
"\n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "106dd65e-90af-4ca8-86b6-23a41840645b",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"\n",
"\n",
"\n",
"# A modified class to fetch and parse fully rendered pages\n",
"class NewWebsite:\n",
" shared_driver = None # Class variable to share one browser instance\n",
"\n",
" def __init__(self, url, driver=None):\n",
" self.url = url\n",
" self.driver = driver or NewWebsite._get_shared_driver()\n",
" self.text, self.title, self.links = self._scrape_content()\n",
"\n",
" @classmethod\n",
" def _get_shared_driver(cls):\n",
" if cls.shared_driver is None:\n",
" # Set up headless Chrome options\n",
" options = Options()\n",
" options.add_argument(\"--headless=new\")\n",
" options.add_argument(\"--disable-gpu\")\n",
" options.add_argument(\"--no-sandbox\")\n",
" options.add_argument(\"--disable-dev-shm-usage\")\n",
" options.add_argument(\"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\")\n",
"\n",
" service = Service(ChromeDriverManager().install())\n",
" cls.shared_driver = webdriver.Chrome(service=service, options=options)\n",
" return cls.shared_driver\n",
"\n",
" def _scrape_content(self):\n",
" try:\n",
" self.driver.get(self.url)\n",
" # Wait for links to appear instead of hammering the server with retries\n",
" WebDriverWait(self.driver, 15).until(EC.presence_of_element_located((By.TAG_NAME, \"a\")))\n",
" # Allow JS-rendered content to settle\n",
" time.sleep(2)\n",
"\n",
" # Get the page source after rendering\n",
" soup = BeautifulSoup(self.driver.page_source, \"html.parser\")\n",
"\n",
" for tag in soup([\"script\", \"style\", \"img\", \"input\"]):\n",
" tag.decompose()\n",
"\n",
" title = soup.title.string.strip() if soup.title and soup.title.string else \"No title found\"\n",
" body = soup.body\n",
" text = body.get_text(separator=\"\\n\", strip=True) if body else \"No content found.\"\n",
"\n",
" # Extract and clean links\n",
" links = []\n",
" for link_tag in soup.find_all(\"a\", href=True):\n",
" href = link_tag[\"href\"].strip()\n",
" if href and not href.startswith((\"mailto:\", \"tel:\", \"javascript:\")):\n",
" full_url = urljoin(self.url, href)\n",
" links.append(full_url)\n",
"\n",
" return text, title, links\n",
"\n",
" except Exception:\n",
" return \"Error loading content\", \"Error\", []\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"\n",
"\n",
" # Close the shared driver when all scraping is done\n",
" @classmethod\n",
" def close_driver(cls):\n",
" if cls.shared_driver:\n",
" cls.shared_driver.quit()\n",
" cls.shared_driver = None\n"
]
},
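{
"cell_type": "code",
"execution_count": null,
"id": "shared-driver-demo-added",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check of the shared-driver design: two NewWebsite instances\n",
"# should reuse one Chrome process. A sketch -- example.com and example.org are\n",
"# illustrative URLs, and this requires Chrome to be installed.\n",
"a = NewWebsite(\"https://example.com\")\n",
"b = NewWebsite(\"https://example.org\")\n",
"print(a.driver is b.driver) # both use the class-level driver\n",
"NewWebsite.close_driver()\n"
]
},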
{
"cell_type": "code",
"execution_count": null,
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
"metadata": {},
"outputs": [],
"source": [
"cardiff = NewWebsite(\"https://cardiff.co/\")\n",
"cardiff.links"
]
},
{
"cell_type": "markdown",
"id": "1771af9c-717a-4fca-bbbe-8a95893312c3",
"metadata": {},
"source": [
"## First step: Have GPT-4o-mini figure out which links are relevant\n",
"\n",
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON.\n",
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
"\n",
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
"\n",
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6957b079-0d96-45f7-a26a-3487510e9b35",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b97e4068-97ed-4120-beae-c42105e4d59a",
"metadata": {},
"outputs": [],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
"metadata": {},
"outputs": [],
"source": [
"print(get_links_user_prompt(cardiff))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
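{
"cell_type": "code",
"execution_count": null,
"id": "get-links-safe-demo-added",
"metadata": {},
"outputs": [],
"source": [
"# A defensive variant (an assumption, not part of the course code): even with\n",
"# response_format json_object, it is cheap to guard json.loads with a fallback\n",
"# so a malformed response cannot crash the brochure pipeline.\n",
"def get_links_safe(url):\n",
" try:\n",
" return get_links(url)\n",
" except json.JSONDecodeError:\n",
" return {\"links\": []}\n"
]
},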
{
"cell_type": "code",
"execution_count": null,
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2",
"metadata": {},
"outputs": [],
"source": [
"# Anthropic has made their site harder to scrape, so I'm using HuggingFace instead\n",
"\n",
"huggingface = Website(\"https://huggingface.co\")\n",
"huggingface.links"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
"metadata": {},
"outputs": [],
"source": [
"get_links(\"https://cardiff.co\")"
]
},
{
"cell_type": "markdown",
"id": "0d74128e-dfb6-47ec-9549-288b621c838c",
"metadata": {},
"source": [
"## Second step: make the brochure!\n",
"\n",
"Assemble all the details into another prompt to GPT-4o-mini"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result\n",
"\n",
"def get_all_details_rendered(url):\n",
" result = \"Landing page:\\n\"\n",
" result += NewWebsite(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
"\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += NewWebsite(link[\"url\"]).get_contents()\n",
"\n",
" # Important: close the browser after all scraping is done\n",
" NewWebsite.close_driver()\n",
" return result\n",
"\n",
"def scrape_link(link):\n",
" try:\n",
" page = NewWebsite(link[\"url\"])\n",
" return f\"\\n\\n{link['type']}\\n{page.get_contents()}\"\n",
" except Exception as e:\n",
" return f\"\\n\\n{link['type']}\\nError loading page: {e}\"\n",
"\n",
"# Threaded scraper for linked pages\n",
"# Note: the threads all reuse one shared driver; Selenium sessions are not\n",
"# guaranteed thread-safe, so for heavy use prefer one driver per thread\n",
"def get_all_details_rendered_concurrently(url):\n",
" result = \"Landing page:\\n\"\n",
" result += NewWebsite(url).get_contents()\n",
"\n",
" # LLM-filtered links\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
"\n",
" with concurrent.futures.ThreadPoolExecutor() as executor:\n",
" future_to_link = {executor.submit(scrape_link, link): link for link in links[\"links\"]}\n",
" for future in concurrent.futures.as_completed(future_to_link):\n",
" result += future.result()\n",
"\n",
" # Close the shared browser\n",
" NewWebsite.close_driver()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details_rendered_concurrently(\"https://cardiff.co\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown. \\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\"\n",
"\n",
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
"\n",
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown. \\\n",
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += \"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" #user_prompt += get_all_details(url)\n",
" user_prompt += get_all_details_rendered_concurrently(url)\n",
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd909e0b-1312-4ce2-a553-821e795d7572",
"metadata": {},
"outputs": [],
"source": [
"get_brochure_user_prompt(\"Cardiff\", \"https://cardiff.co\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e093444a-9407-42ae-924a-145730591a39",
"metadata": {},
"outputs": [],
"source": [
"create_brochure(\"Cardiff\", \"https://cardiff.co\")"
]
},
{
"cell_type": "markdown",
"id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18",
"metadata": {},
"source": [
"## Finally - a minor improvement\n",
"\n",
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n",
"with the familiar typewriter animation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51db0e49-f261-4137-aabe-92dd601f7725",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
"\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
"metadata": {},
"outputs": [],
"source": [
"stream_brochure(\"Cardiff\", \"https://cardiff.co\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fdb3f8d8-a3eb-41c8-b1aa-9f60686a653b",
"metadata": {},
"outputs": [],
"source": [
"# Try changing the system prompt to the humorous version when you make the Brochure for Hugging Face:\n",
"\n",
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "a27bf9e0-665f-4645-b66b-9725e2a959b5",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n",
"\n",
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
"Generating content in this way is one of the most common use cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype. See what other students have done in the community-contributions folder -- so many valuable projects -- it's wild!</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "14b2454b-8ef8-4b5c-b928-053a15e0d553",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you move to Week 2 (which is tons of fun)</h2>\n",
" <span style=\"color:#900;\">Please see the week1 EXERCISE notebook for your challenge for the end of week 1. This will give you some essential practice working with Frontier APIs, and prepare you well for Week 2.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "17b64f0f-7d33-4493-985a-033d06e8db08",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">A reminder on 3 useful resources</h2>\n",
" <span style=\"color:#f71;\">1. The resources for the course are available <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">here.</a><br/>\n",
" 2. I'm on LinkedIn <a href=\"https://www.linkedin.com/in/eddonner/\">here</a> and I love connecting with people taking the course!<br/>\n",
" 3. I'm trying out X/Twitter and I'm at <a href=\"https://x.com/edwarddonner\">@edwarddonner</a> and hoping people will teach me how it's done.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6f48e42e-fa7a-495f-a5d4-26bfc24d60b6",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../thankyou.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#090;\">Finally! I have a special request for you</h2>\n",
" <span style=\"color:#090;\">\n",
" My editor tells me that it makes a MASSIVE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. If you're able to take a minute to rate this, I'd be so very grateful! And regardless - always please reach out to me at ed@edwarddonner.com if I can help at any point.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8d3e1a1-ba54-4907-97c5-30f89a24775b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,349 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display, Image\n",
"from openai import OpenAI\n",
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.options import Options\n",
"from selenium.webdriver.chrome.service import Service\n",
"from selenium.webdriver.common.by import By\n",
"from selenium.webdriver.support.ui import WebDriverWait\n",
"from selenium.webdriver.support import expected_conditions as EC\n",
"from webdriver_manager.chrome import ChromeDriverManager\n",
"from urllib.parse import urlparse, urljoin\n",
"import time\n",
"import random\n",
"import concurrent.futures\n",
"import re"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"\n",
"# A modified class to fetch and parse fully rendered pages\n",
"class NewWebsite:\n",
" shared_driver = None # Class variable to share one browser instance\n",
"\n",
" def __init__(self, url, driver=None):\n",
" self.url = url\n",
" self.driver = driver or NewWebsite._get_shared_driver()\n",
" self.text, self.title, self.links = self._scrape_content()\n",
"\n",
" @classmethod\n",
" def _get_shared_driver(cls):\n",
" if cls.shared_driver is None:\n",
" # Set up headless Chrome options\n",
" options = Options()\n",
" options.add_argument(\"--headless=new\")\n",
" options.add_argument(\"--disable-gpu\")\n",
" options.add_argument(\"--no-sandbox\")\n",
" options.add_argument(\"--disable-dev-shm-usage\")\n",
" options.add_argument(\"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\")\n",
"\n",
" service = Service(ChromeDriverManager().install())\n",
" cls.shared_driver = webdriver.Chrome(service=service, options=options)\n",
" return cls.shared_driver\n",
"\n",
" def _scrape_content(self):\n",
" try:\n",
" self.driver.get(self.url)\n",
" # Wait for links to appear instead of hammering the server with retries\n",
" WebDriverWait(self.driver, 15).until(EC.presence_of_element_located((By.TAG_NAME, \"a\")))\n",
" # Allow JS-rendered content to settle\n",
" time.sleep(2)\n",
"\n",
" # Get the page source after rendering\n",
" soup = BeautifulSoup(self.driver.page_source, \"html.parser\")\n",
"\n",
" for tag in soup([\"script\", \"style\", \"img\", \"input\"]):\n",
" tag.decompose()\n",
"\n",
" title = soup.title.string.strip() if soup.title and soup.title.string else \"No title found\"\n",
" body = soup.body\n",
" text = body.get_text(separator=\"\\n\", strip=True) if body else \"No content found.\"\n",
|
||||
"\n",
|
||||
" # Extract and clean links\n",
|
||||
" links = []\n",
|
||||
" for link_tag in soup.find_all(\"a\", href=True):\n",
|
||||
" href = link_tag[\"href\"].strip()\n",
|
||||
" if href and not href.startswith((\"mailto:\", \"tel:\", \"javascript:\")):\n",
|
||||
" full_url = urljoin(self.url, href)\n",
|
||||
" links.append(full_url)\n",
|
||||
" \n",
|
||||
" return text, title, links\n",
|
||||
" \n",
|
||||
" except Exception as e:\n",
|
||||
" return \"Error loading content\", \"Error\", []\n",
|
||||
"\n",
"    def get_contents(self):\n",
"        return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"\n",
"\n",
"    # Close the driver\n",
"    @classmethod\n",
"    def close_driver(cls):\n",
"        if cls.shared_driver:\n",
"            cls.shared_driver.quit()\n",
"            cls.shared_driver = None\n",
"\n",
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
"    \"links\": [\n",
"        {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
"        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
"    ]\n",
"}\n",
"\"\"\"\n",
"\n",
"def get_links_user_prompt(website):\n",
"    user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
"    user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
"    user_prompt += \"Links (some might be relative links):\\n\"\n",
"    user_prompt += \"\\n\".join(website.links)\n",
"    return user_prompt\n",
"\n",
"def get_links(url):\n",
"    website = NewWebsite(url)\n",
"    response = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": link_system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
"        ],\n",
"        response_format={\"type\": \"json_object\"}\n",
"    )\n",
"    result = response.choices[0].message.content\n",
"    return json.loads(result)\n",
"\n",
"def scrape_link(link):\n",
"    try:\n",
"        page = NewWebsite(link[\"url\"])\n",
"        return f\"\\n\\n{link['type']}\\n{page.get_contents()}\"\n",
"    except Exception as e:\n",
"        return f\"\\n\\n{link['type']}\\nError loading page: {e}\"\n",
"\n",
"# Threaded scraper for linked pages\n",
"# Note: every NewWebsite instance shares one Chrome driver, which is not thread-safe;\n",
"# for true parallelism, give each worker its own driver instance\n",
"def get_all_details_rendered_concurrently(url):\n",
"    result = \"Landing page:\\n\"\n",
"    result += NewWebsite(url).get_contents()\n",
"\n",
"    # LLM-filtered link generator\n",
"    links = get_links(url)\n",
"    print(\"Found links:\", links)\n",
"\n",
"    with concurrent.futures.ThreadPoolExecutor() as executor:\n",
"        future_to_link = {executor.submit(scrape_link, link): link for link in links[\"links\"]}\n",
"        for future in concurrent.futures.as_completed(future_to_link):\n",
"            result += future.result()\n",
"\n",
"    # Close shared browser\n",
"    NewWebsite.close_driver()\n",
"    return result\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# Build the system and user prompts for a solution proposal\n",
"\n",
"system_prompt = \"You are an LLM Engineer who analyzes the contents of several relevant pages from a company website, \\\n",
"rewrites internal tools and systems, and rebuilds them end-to-end, starting from scratch. Starting with the online application at cardiff.co/apply, \\\n",
"tell me why you're best suited to lead this project and work with our 12-year resident developer to implement a \\\n",
"state-of-the-art solution in record time. Include backend architecture, model orchestration, how you handle latency, cost and user experience, \\\n",
"and details of how you would achieve this goal based on company culture and industries served, if you have the information, \\\n",
"and walk me through the details like you're explaining it to a sharp product owner. Respond in markdown.\"\n",
"\n",
"\n",
"def get_solution_user_prompt(company_name, url):\n",
"    user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
"    user_prompt += \"Here are the contents of its landing page and other relevant pages; use this information to build a solution to rewrite the company's application in markdown.\\n\",
"    user_prompt += get_all_details_rendered_concurrently(url)\n",
"    user_prompt = user_prompt[:5_000]  # Truncate if more than 5,000 characters\n",
"    return user_prompt\n",
"\n",
"def create_solution(company_name, url):\n",
"    response = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_solution_user_prompt(company_name, url)}\n",
"        ],\n",
"    )\n",
"    result = response.choices[0].message.content\n",
"    display(Markdown(result))\n",
"\n",
"    return result\n",
"\n",
"#create_solution(\"Cardiff\", \"https://cardiff.co\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o to answer, with streaming\n",
"\n",
"new_system_prompt = \"You are a Senior Engineer who analyzes the proposed solution you are given for a company website \\\n",
"and rewrites code for rebuilding internal tools and systems end-to-end based on that proposal. \\\n",
"Start with the online application at cardiff.co/apply, use canvas, and write code for the proposed solution \\\n",
"in whichever language best suits each task, covering backend architecture, model orchestration, latency, cost and user experience wherever possible.\"\n",
"\n",
"output_dir = \"cardiff_rebuild_output\"\n",
"os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"def save_code_blocks(markdown_text, base_filename=\"cardiff_code\"):\n",
"    output_dir = \"cardiff_rebuild_output\"\n",
"    os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"    code_blocks = re.findall(r\"```(.*?)\\n(.*?)```\", markdown_text, re.DOTALL)\n",
"    saved_files = []\n",
"\n",
"    for idx, (language, code) in enumerate(code_blocks, 1):\n",
"        ext = language.strip() if language else \"txt\"\n",
"        filename = f\"{base_filename}_part{idx}.{ext}\"\n",
"        filepath = os.path.join(output_dir, filename)\n",
"        with open(filepath, \"w\", encoding=\"utf-8\") as f:\n",
"            f.write(code)\n",
"        saved_files.append(filepath)\n",
"\n",
"    return saved_files\n",
"\n",
"def develop_from_proposal(proposal_text, company_name):\n",
"    # Stream code generation from GPT-4o\n",
"    system = \"You are a senior software engineer. Use the following proposal to generate production-ready code to \\\n",
"    implement the backend, frontend, and any orchestration described. Write clean, documented code in markdown format.\"\n",
"\n",
"    stream = openai.chat.completions.create(\n",
"        model=\"gpt-4o\",\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system},\n",
"            {\"role\": \"user\", \"content\": proposal_text}\n",
"        ],\n",
"        stream=True\n",
"    )\n",
"\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        content = chunk.choices[0].delta.content or \"\"\n",
"        response += content\n",
"        update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"    saved_files = save_code_blocks(response)\n",
"\n",
"    # Generate a UI design mockup image\n",
"    image_prompt = f\"A modern, mobile-friendly UI wireframe for a business loan application system for {company_name}. Clean layout, input fields for business name, revenue, loan amount, industry, and contact info. Includes a step-by-step progress bar, submit button, and secure branding.\"\n",
"\n",
"    img_response = openai.images.generate(\n",
"        model=\"dall-e-3\",\n",
"        prompt=image_prompt,\n",
"        n=1,\n",
"        size=\"1024x1024\"\n",
"    )\n",
"\n",
"    image_url = img_response.data[0].url\n",
"    img_path = os.path.join(output_dir, f\"{company_name.lower()}_ui_mockup.png\")\n",
"    with open(img_path, 'wb') as handler:\n",
"        handler.write(requests.get(image_url).content)\n",
"\n",
"    print(\"Code files saved to:\", saved_files)\n",
"    print(\"UI mockup saved at:\", img_path)\n",
"\n",
"    display(Markdown(\"### Proposed UI Design\"))\n",
"    display(Image(url=image_url))\n",
"\n",
"proposal = create_solution(\"Cardiff\", \"https://cardiff.co\")\n",
"develop_from_proposal(proposal, \"Cardiff\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [],
"source": [
"# Get Llama 3.2 to answer"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}