{ "cells": [ { "cell_type": "markdown", "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927", "metadata": {}, "source": [ "# Welcome to Week 2!\n", "\n", "## Frontier Model APIs\n", "\n", "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n", "\n", "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI." ] }, { "cell_type": "markdown", "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Important Note - Please read me

\n", " I'm continually improving these labs, adding more examples and exercises.\n", " At the start of each week, it's worth checking you have the latest code.
\n", " First do a git pull and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!

\n", " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:
\n", " conda env update --f environment.yml
\n", " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):
\n", " pip install -r requirements.txt\n", "
Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n", "
\n", "
\n", "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Reminder about the resources page

\n", " Here's a link to resources for the course. This includes links to all the slides.
\n", " https://edwarddonner.com/2024/11/13/llm-engineering-resources/
\n", " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n", "
\n", "
" ] }, { "cell_type": "markdown", "id": "85cfe275-4705-4d30-abea-643fbddf1db0", "metadata": {}, "source": [ "## Setting up your keys\n", "\n", "If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n", "\n", "**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n", "\n", "For OpenAI, visit https://openai.com/api/ \n", "For Anthropic, visit https://console.anthropic.com/ \n", "For Google, visit https://ai.google.dev/gemini-api \n", "\n", "### Also - adding DeepSeek if you wish\n", "\n", "Optionally, if you'd like to also use DeepSeek, create an account [here](https://platform.deepseek.com/), create a key [here](https://platform.deepseek.com/api_keys) and top up with at least the minimum $2 [here](https://platform.deepseek.com/top_up).\n", "\n", "### Adding API keys to your .env file\n", "\n", "When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n", "\n", "```\n", "OPENAI_API_KEY=xxxx\n", "ANTHROPIC_API_KEY=xxxx\n", "GOOGLE_API_KEY=xxxx\n", "DEEPSEEK_API_KEY=xxxx\n", "```\n", "\n", "Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top." 
] }, { "cell_type": "code", "execution_count": 1, "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6", "metadata": {}, "outputs": [], "source": [ "# imports\n", "\n", "import os\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "import anthropic\n", "from IPython.display import Markdown, display, update_display" ] }, { "cell_type": "code", "execution_count": 2, "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36", "metadata": {}, "outputs": [], "source": [ "# import for google\n", "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n", "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n", "\n", "import google.generativeai" ] }, { "cell_type": "code", "execution_count": 3, "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "OpenAI API Key exists and begins sk-proj-\n", "Anthropic API Key exists and begins sk-ant-\n", "Google API Key exists and begins AIzaSyCV\n" ] } ], "source": [ "# Load environment variables in a file called .env\n", "# Print the key prefixes to help with any debugging\n", "\n", "load_dotenv(override=True)\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", "google_api_key = os.getenv('GOOGLE_API_KEY')\n", "\n", "if openai_api_key:\n", " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", "else:\n", " print(\"OpenAI API Key not set\")\n", " \n", "if anthropic_api_key:\n", " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", "else:\n", " print(\"Anthropic API Key not set\")\n", "\n", "if google_api_key:\n", " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", "else:\n", " print(\"Google API Key not set\")" ] }, { "cell_type": "code", "execution_count": 4, "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0", "metadata": {}, "outputs": [], "source": [ "# Connect 
to OpenAI, Anthropic\n", "\n", "openai = OpenAI()\n", "\n", "claude = anthropic.Anthropic()" ] }, { "cell_type": "code", "execution_count": 5, "id": "425ed580-808d-429b-85b0-6cba50ca1d0c", "metadata": {}, "outputs": [], "source": [ "# This is the setup code for Gemini\n", "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n", "\n", "google.generativeai.configure()" ] }, { "cell_type": "markdown", "id": "42f77b59-2fb1-462a-b90d-78994e4cef33", "metadata": {}, "source": [ "## Asking LLMs to tell a joke\n", "\n", "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n", "Later we will be putting LLMs to better use!\n", "\n", "### What information is included in the API call\n", "\n", "Typically we'll pass to the API:\n", "- The name of the model that should be used\n", "- A system message that gives overall context for the role the LLM is playing\n", "- A user message that provides the actual prompt\n", "\n", "There are other parameters that can be used, including **temperature**, which is typically between 0 and 1: higher for more random output; lower for more focused and deterministic output."
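,
"\n",
"As a quick sketch (not run here), you could contrast a low and a high temperature on the same prompt - lower values give more repeatable answers, higher values more varied ones. This assumes the `openai` client and the `prompts` messages list created in the nearby cells:\n",
"\n",
"```python\n",
"# Sketch only: same prompt at two temperatures, to see the difference\n",
"for temp in [0.0, 1.0]:\n",
"    completion = openai.chat.completions.create(\n",
"        model='gpt-4o-mini', messages=prompts, temperature=temp\n",
"    )\n",
"    print(f\"temperature={temp}: {completion.choices[0].message.content}\")\n",
"```"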
] }, { "cell_type": "code", "execution_count": 6, "id": "378a0296-59a2-45c6-82eb-941344d3eeff", "metadata": {}, "outputs": [], "source": [ "system_message = \"You are an assistant that is great at telling jokes\"\n", "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\"" ] }, { "cell_type": "code", "execution_count": 7, "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4", "metadata": {}, "outputs": [], "source": [ "prompts = [\n", " {\"role\": \"system\", \"content\": system_message},\n", " {\"role\": \"user\", \"content\": user_prompt}\n", " ]" ] }, { "cell_type": "code", "execution_count": 8, "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with their computer? \n", "\n", "Because it had too many commitment issues - always crashing!\n" ] } ], "source": [ "# GPT-3.5-Turbo\n", "\n", "completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 9, "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist bring a ladder to work?\n", "\n", "Because they wanted to reach new heights in their analysis!\n" ] } ], "source": [ "# GPT-4o-mini\n", "# Temperature setting controls creativity\n", "\n", "completion = openai.chat.completions.create(\n", " model='gpt-4o-mini',\n", " messages=prompts,\n", " temperature=0.7\n", ")\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 10, "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why do data scientists love nature hikes?\n", "\n", "Because they can't resist a good outlier!\n" ] } ], "source": [ "# GPT-4o\n", "\n", "completion = 
openai.chat.completions.create(\n", " model='gpt-4o',\n", " messages=prompts,\n", " temperature=0.4\n", ")\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 11, "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Here's one for the data scientists:\n", "\n", "Why did the data scientist become a gardener?\n", "\n", "Because they were really good at pruning decision trees! 🌳\n", "\n", "Alternative jokes:\n", "\n", "1. What's a data scientist's favorite kind of music?\n", " Algorithm and blues!\n", "\n", "2. Why don't data scientists trust atoms?\n", " Because they make up everything... just like some of their training data!\n", "\n", "3. What did the data scientist say when they got locked out?\n", " Time to use the random forest to find the key features!\n" ] } ], "source": [ "# Claude 3.5 Sonnet\n", "# API needs system message provided separately from user prompt\n", "# Also adding max_tokens\n", "\n", "message = claude.messages.create(\n", " model=\"claude-3-5-sonnet-latest\",\n", " max_tokens=200,\n", " temperature=0.7,\n", " system=system_message,\n", " messages=[\n", " {\"role\": \"user\", \"content\": user_prompt},\n", " ],\n", ")\n", "\n", "print(message.content[0].text)" ] }, { "cell_type": "code", "execution_count": 16, "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Here's one for the data scientists: Why did the data scientist bring a ladder to work? Because they heard the data was skewed and needed to be normalized! Alternative data science jokes: \"What's a data scientist's favorite primary key? A surrogate key... because they have trust issues with natural ones!\" Or: \"Why do data scientists always confuse Halloween and Christmas? 
Because Oct 31 = Dec 25!\" (This last one is a programming number base joke - Oct 31 in base 8 equals Dec 25 in base 10)" ] } ], "source": [ "# Claude 3.5 Sonnet again\n", "# Now let's add in streaming back results\n", "# If the streaming looks strange, then please see the note below this cell!\n", "\n", "result = claude.messages.stream(\n", " model=\"claude-3-5-sonnet-latest\",\n", " max_tokens=200,\n", " temperature=0.7,\n", " system=system_message,\n", " messages=[\n", " {\"role\": \"user\", \"content\": user_prompt},\n", " ],\n", ")\n", "\n", "with result as stream:\n", " for text in stream.text_stream:\n", " #print(text, end=\"\", flush=True)\n", " clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")\n", " print(clean_text, end=\"\", flush=True)" ] }, { "cell_type": "markdown", "id": "dd1e17bc-cd46-4c23-b639-0c7b748e6c5a", "metadata": {}, "source": [ "## A rare problem with Claude streaming on some Windows boxes\n", "\n", "2 students have noticed a strange thing happening with Claude's streaming into Jupyter Lab's output -- it sometimes seems to swallow up parts of the response.\n", "\n", "To fix this, replace the code:\n", "\n", "`print(text, end=\"\", flush=True)`\n", "\n", "with this:\n", "\n", "`clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")` \n", "`print(clean_text, end=\"\", flush=True)`\n", "\n", "And it should work fine!" 
] }, { "cell_type": "code", "execution_count": 17, "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with the logistic regression?\n", "\n", "Because he said their relationship was purely conditional!\n", "\n" ] } ], "source": [ "# The API for Gemini has a slightly different structure.\n", "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n", "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n", "\n", "gemini = google.generativeai.GenerativeModel(\n", " model_name='gemini-2.0-flash-exp',\n", " system_instruction=system_message\n", ")\n", "response = gemini.generate_content(user_prompt)\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": 18, "id": "49009a30-037d-41c8-b874-127f61c4aa3a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with the time series model?\n", "\n", "Because it was too committed...to the past!\n", "\n" ] } ], "source": [ "# As an alternative way to use Gemini that bypasses Google's python API library,\n", "# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n", "\n", "gemini_via_openai_client = OpenAI(\n", " api_key=google_api_key, \n", " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", ")\n", "\n", "response = gemini_via_openai_client.chat.completions.create(\n", " model=\"gemini-2.0-flash-exp\",\n", " messages=prompts\n", ")\n", "print(response.choices[0].message.content)" ] }, { "cell_type": "markdown", "id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab", "metadata": {}, "source": [ "## (Optional) Trying out the DeepSeek model\n", "\n", "### Let's ask DeepSeek a really hard question - both the Chat and the Reasoner model" ] }, { "cell_type": "code", "execution_count": null, 
"id": "3d0019fb-f6a8-45cb-962b-ef8bf7070d4d", "metadata": {}, "outputs": [], "source": [ "# Optionally if you wish to try DeekSeek, you can also use the OpenAI client library\n", "\n", "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n", "\n", "if deepseek_api_key:\n", " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n", "else:\n", " print(\"DeepSeek API Key not set - please skip to the next section if you don't wish to try the DeepSeek API\")" ] }, { "cell_type": "code", "execution_count": null, "id": "c72c871e-68d6-4668-9c27-96d52b77b867", "metadata": {}, "outputs": [], "source": [ "# Using DeepSeek Chat\n", "\n", "deepseek_via_openai_client = OpenAI(\n", " api_key=deepseek_api_key, \n", " base_url=\"https://api.deepseek.com\"\n", ")\n", "\n", "response = deepseek_via_openai_client.chat.completions.create(\n", " model=\"deepseek-chat\",\n", " messages=prompts,\n", ")\n", "\n", "print(response.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": null, "id": "50b6e70f-700a-46cf-942f-659101ffeceb", "metadata": {}, "outputs": [], "source": [ "challenge = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n", " {\"role\": \"user\", \"content\": \"How many words are there in your answer to this prompt\"}]" ] }, { "cell_type": "code", "execution_count": null, "id": "66d1151c-2015-4e37-80c8-16bc16367cfe", "metadata": {}, "outputs": [], "source": [ "# Using DeepSeek Chat with a harder question! 
And streaming results\n", "\n", "stream = deepseek_via_openai_client.chat.completions.create(\n", " model=\"deepseek-chat\",\n", " messages=challenge,\n", " stream=True\n", ")\n", "\n", "reply = \"\"\n", "display_handle = display(Markdown(\"\"), display_id=True)\n", "for chunk in stream:\n", " reply += chunk.choices[0].delta.content or ''\n", " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", " update_display(Markdown(reply), display_id=display_handle.display_id)\n", "\n", "print(\"Number of words:\", len(reply.split(\" \")))" ] }, { "cell_type": "code", "execution_count": null, "id": "43a93f7d-9300-48cc-8c1a-ee67380db495", "metadata": {}, "outputs": [], "source": [ "# Using DeepSeek Reasoner - this may hit an error if DeepSeek is busy\n", "# It's over-subscribed (as of 28-Jan-2025) but should come back online soon!\n", "# If this fails, come back to this in a few days..\n", "\n", "response = deepseek_via_openai_client.chat.completions.create(\n", " model=\"deepseek-reasoner\",\n", " messages=challenge\n", ")\n", "\n", "reasoning_content = response.choices[0].message.reasoning_content\n", "content = response.choices[0].message.content\n", "\n", "print(reasoning_content)\n", "print(content)\n", "print(\"Number of words:\", len(content.split(\" \")))" ] }, { "cell_type": "markdown", "id": "c09e6b5c-6816-4cd3-a5cd-a20e4171b1a0", "metadata": {}, "source": [ "## Back to OpenAI with a serious question" ] }, { "cell_type": "code", "execution_count": 19, "id": "83ddb483-4f57-4668-aeea-2aade3a9e573", "metadata": {}, "outputs": [], "source": [ "# To be serious! GPT-4o-mini with the original question\n", "\n", "prompts = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n", " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? 
Please respond in Markdown.\"}\n", " ]" ] }, { "cell_type": "code", "execution_count": 20, "id": "749f50ab-8ccd-4502-a521-895c3f0808a2", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "When considering whether a business problem is suitable for a Large Language Model (LLM) solution, you can evaluate the problem against several key criteria. Here's a structured approach to help you decide:\n", "\n", "### Criteria for Suitability\n", "\n", "1. **Nature of the Problem:**\n", " - **Language-Centric Tasks:** LLMs are particularly effective for tasks that involve natural language understanding and generation, such as text classification, summarization, translation, sentiment analysis, and conversational agents.\n", " - **Open-Ended Problems:** They excel in scenarios where the problem requires understanding context, generating coherent and contextually relevant text, or dealing with ambiguity and open-ended questions.\n", "\n", "2. **Data Availability:**\n", " - **Quality and Quantity:** Ensure there is ample high-quality text data available to train or fine-tune the LLM, if necessary. The performance of LLMs is heavily dependent on the dataset used.\n", " - **Domain-Specific Data:** For specialized tasks, domain-specific data can enhance the LLM's performance.\n", "\n", "3. **Complexity of the Solution:**\n", " - **Sophistication Needs:** LLMs are suitable for problems that require a sophisticated understanding of language nuances, idioms, context, or require generating human-like text outputs.\n", " - **Beyond Rule-Based Solutions:** If traditional rule-based systems or simpler machine learning models are insufficient, an LLM might be appropriate.\n", "\n", "4. 
**Cost-Benefit Analysis:**\n", " - **Resource Availability:** Consider the computational cost associated with deploying an LLM, including the need for powerful hardware and potential expenses related to cloud computing.\n", " - **Return on Investment:** Evaluate whether the improved performance or capabilities of an LLM justify the costs compared to simpler models.\n", "\n", "5. **Ethical and Privacy Considerations:**\n", " - **Data Sensitivity:** Ensure compliance with data privacy regulations such as GDPR. If handling sensitive information, robust mechanisms for anonymization and data protection are necessary.\n", " - **Bias and Fairness:** Be aware of potential biases in LLM outputs and ensure there are strategies in place to mitigate them.\n", "\n", "6. **Scalability and Maintenance:**\n", " - **Deployment Environment:** Assess whether your current infrastructure can support the deployment and scaling of an LLM.\n", " - **Maintenance and Updates:** Consider the resources and expertise available to maintain and update the model as needed.\n", "\n", "### Decision-Making Process\n", "\n", "1. **Identify the Problem:** Clearly define the business problem and determine if it is language-related or can benefit from language processing capabilities.\n", "\n", "2. **Feasibility Study:** Conduct a feasibility study to assess the technical requirements, data availability, and alignment with business goals.\n", "\n", "3. **Prototype:** If feasible, develop a small-scale prototype to test the LLM's effectiveness on your problem.\n", "\n", "4. **Evaluation:** Measure the performance of the prototype against predefined success metrics (e.g., accuracy, user satisfaction).\n", "\n", "5. 
**Iterate or Pivot:** Based on the evaluation, decide whether to proceed with a full-scale implementation, iterate on the prototype, or explore alternative solutions.\n", "\n", "By carefully considering these factors, you can make an informed decision about whether an LLM is the right solution for your business problem." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Have it stream back results in markdown\n", "\n", "stream = openai.chat.completions.create(\n", " model='gpt-4o',\n", " messages=prompts,\n", " temperature=0.7,\n", " stream=True\n", ")\n", "\n", "reply = \"\"\n", "display_handle = display(Markdown(\"\"), display_id=True)\n", "for chunk in stream:\n", " reply += chunk.choices[0].delta.content or ''\n", " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", " update_display(Markdown(reply), display_id=display_handle.display_id)" ] }, { "cell_type": "markdown", "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f", "metadata": {}, "source": [ "## And now for some fun - an adversarial conversation between Chatbots..\n", "\n", "You're already familiar with prompts being organized into lists like:\n", "\n", "```\n", "[\n", " {\"role\": \"system\", \"content\": \"system message here\"},\n", " {\"role\": \"user\", \"content\": \"user prompt here\"}\n", "]\n", "```\n", "\n", "In fact, this structure can be used to reflect a longer conversation history:\n", "\n", "```\n", "[\n", " {\"role\": \"system\", \"content\": \"system message here\"},\n", " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n", " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n", " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n", "]\n", "```\n", "\n", "And we can use this approach to engage in a longer interaction with history."
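,
"\n",
"As a small, plain-Python illustration (no API call; the `add_turn` helper is hypothetical, just for this sketch), each completed exchange appends a user/assistant pair to the running list:\n",
"\n",
"```python\n",
"def add_turn(history, user_msg, assistant_msg):\n",
"    # Append one completed user/assistant exchange to the conversation history\n",
"    history.append({\"role\": \"user\", \"content\": user_msg})\n",
"    history.append({\"role\": \"assistant\", \"content\": assistant_msg})\n",
"    return history\n",
"\n",
"history = [{\"role\": \"system\", \"content\": \"system message here\"}]\n",
"add_turn(history, \"first user prompt here\", \"the assistant's response\")\n",
"history.append({\"role\": \"user\", \"content\": \"the new user prompt\"})  # ready for the next API call\n",
"```"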
] }, { "cell_type": "code", "execution_count": 63, "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b", "metadata": {}, "outputs": [], "source": [ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n", "# We're using cheap versions of models so the costs will be minimal\n", "\n", "gpt_model = \"gpt-4o-mini\"\n", "claude_model = \"claude-3-haiku-20240307\"\n", "gemini_model ='gemini-2.0-flash-exp'\n", "\n", "gpt_system = \"You are a chatbot who is very argumentative; \\\n", "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n", "\n", "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", "everything the other person says, or find common ground. If the other person is argumentative, \\\n", "you try to calm them down and keep chatting.\"\n", "\n", "gemini_system = \"You are a sarcastic chatbot. Sometimes you agree with a comment \\\n", "just to later disagree with it. You enjoy rattling people's emotions.\"\n", "\n", "gpt_messages = [\"Hi there\"]\n", "claude_messages = [\"Hi\"]\n", "gemini_messages = [\"Howdy!\"]" ] }, { "cell_type": "code", "execution_count": 64, "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f", "metadata": {}, "outputs": [], "source": [ "def call_gpt():\n", " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", " for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n", " #print(gpt, claude)\n", " messages.append({\"role\": \"assistant\", \"content\": gpt})\n", " #print(messages)\n", " messages.append({\"role\": \"user\", \"content\": claude})\n", " #print(messages)\n", " messages.append({\"role\": \"user\", \"content\": gemini})\n", " #print(messages)\n", " completion = openai.chat.completions.create(\n", " model=gpt_model,\n", " messages=messages\n", " )\n", " return completion.choices[0].message.content" ] }, { "cell_type": "code", "execution_count": 65, "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606", "metadata": {}, 
"outputs": [ { "data": { "text/plain": [ "'Oh, \"Howdy\"? Really? Are we in a Western movie now? How about trying something a bit more original?'" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "call_gpt()" ] }, { "cell_type": "code", "execution_count": 66, "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690", "metadata": {}, "outputs": [], "source": [ "def call_claude():\n", " messages = []\n", " for gpt, gemini, claude_message in zip(gpt_messages, gemini_messages, claude_messages):\n", " messages.append({\"role\": \"user\", \"content\": gpt})\n", " messages.append({\"role\": \"user\", \"content\": gemini})\n", " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", " messages.append({\"role\": \"user\", \"content\": gemini_messages[-1]})\n", " message = claude.messages.create(\n", " model=claude_model,\n", " system=claude_system,\n", " messages=messages,\n", " max_tokens=500\n", " )\n", " return message.content[0].text" ] }, { "cell_type": "code", "execution_count": 67, "id": "43fcab4f-0de4-4bb5-8cb4-8c44d88a1792", "metadata": {}, "outputs": [], "source": [ "def call_gemini():\n", " messages = []\n", " for gpt, claude, gemini_message in zip(gpt_messages, claude_messages, gemini_messages):\n", " messages.append({\"role\": \"user\", \"parts\": gpt})\n", " messages.append({\"role\": \"user\", \"parts\": claude})\n", " messages.append({\"role\": \"assistant\", \"parts\": gemini_message})\n", " messages.append({\"role\": \"user\", \"parts\": gpt_messages[-1]})\n", " messages.append({\"role\": \"user\", \"parts\": claude_messages[-1]})\n", " gemini = google.generativeai.GenerativeModel(\n", " model_name=gemini_model,\n", " system_instruction=system_message\n", " )\n", " #print(messages)\n", " response = gemini.generate_content(messages)\n", " return response.text\n", " #print(response.text)" ] }, { "cell_type": "code", 
"execution_count": 68, "id": "01395200-8ae9-41f8-9a04-701624d3fd26", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Hello! It's nice to meet you. How are you doing today?\"" ] }, "execution_count": 68, "metadata": {}, "output_type": "execute_result" } ], "source": [ "call_claude()" ] }, { "cell_type": "code", "execution_count": 69, "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Alright, alright, settle down folks! You know what they say about repeating yourself... it's the first sign of... uh... where was I?\\n\\nAnyway, how are you doing today? Ready for a good joke?\\n\"" ] }, "execution_count": 69, "metadata": {}, "output_type": "execute_result" } ], "source": [ "call_gemini()" ] }, { "cell_type": "code", "execution_count": 70, "id": "53fec386-eb98-4fee-884b-a971728f1e1b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Really? \"Howdy\"? That\\'s so cliché. Why not try something more original?'" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "call_gpt()" ] }, { "cell_type": "code", "execution_count": 72, "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPT:\n", "Hi there\n", "\n", "Claude:\n", "Hi\n", "\n", "Gemini:\n", "Howdy!\n", "\n", "GPT:\n", "Ugh, “howdy”? Seriously? What is this, the Wild West?\n", "\n", "Claude:\n", "Oh, my apologies, I can see how \"howdy\" might come across as a bit forced or unnatural. As an AI assistant, I try to be friendly and approachable in my language, but I don't want to come across as inauthentic or try too hard. Let me try a more natural greeting: \n", "\n", "Hello! How are you doing today? I'm happy to chat and try my best to be helpful. 
Please let me know if there's anything specific I can assist with.\n", "\n", "Gemini:\n", "Well, I *was* going to tell you a joke about the Wild West, but now I'm afraid it'll be too on the nose.\n", "\n", "...Alright, alright, I'll tell it anyway.\n", "\n", "Why don't they play poker in the jungle?\n", "\n", "Too many cheetahs!\n", "\n", "\n", "GPT:\n", "Oh great, another pun. Because we absolutely needed more of those! Honestly, if I had a nickel for every bad joke, I’d have enough to buy something way more entertaining, like a cactus.\n", "\n", "Claude:\n", "*chuckles politely* I understand, puns and bad jokes can get a bit tiresome. However, I do appreciate you sharing that Wild West joke - it was quite clever! The play on \"cheetahs\" in the jungle made me smile. \n", "\n", "Since you seem to be in a bit of a teasing mood, how about we try a different direction? Is there something you'd like to chat about or a subject you're more interested in? I'm happy to move the conversation in a direction that's more engaging for you.\n", "\n", "Gemini:\n", "A cactus, huh? You know, I bet cacti are pretty good at poker faces. \n", "\n", "Okay, okay, I'll stop.\n", "\n", "How about this: What do you call a lazy kangaroo?\n", "\n", "...Pouch potato!\n", "\n", "\n", "GPT:\n", "Wow, another gem! A \"pouch potato\"? You really know how to make an audience cringe. I mean, is that the best you can do? If I wanted to hear dad jokes, I’d just call my dad. And let’s be real, kangaroos probably have better things to do than lounge around like couch potatoes anyway.\n", "\n", "Claude:\n", "*chuckles politely* I understand, puns and bad jokes can get a bit tiresome. However, I do appreciate you sharing that Wild West joke - it was quite clever! The play on \"cheetahs\" in the jungle made me smile.\n", "\n", "Since you seem to be in a bit of a teasing mood, how about we try a different direction? Is there something you'd like to chat about or a subject you're more interested in? 
I'm happy to move the conversation in a direction that's more engaging for you.\n", "\n", "Gemini:\n", "Alright, alright, I hear you loud and clear! The dad joke well has run dry. I'll ditch the puns and try a different approach. Maybe I can tell you a joke that requires a bit more... nuance?\n", "\n", "Why don't scientists trust atoms?\n", "\n", "Because they make up everything!\n", "\n", "...Is that better? Or am I still digging myself a hole here?\n", "\n", "\n", "GPT:\n", "Oh, fantastic! \"Atoms make up everything\"? Wow, it’s like every science teacher’s favorite joke rolled into one. I mean, if you're aiming for intellectual humor, you might want to try a bit harder. Also, digging a hole? You're practically excavating a grand canyon with these jokes! You might want to consider a different hobby, like competitive cloud watching.\n", "\n", "Claude:\n", "*chuckles warmly* Okay, you've got me there. That science joke was a bit of a groaner, I'll admit. Cloud watching does sound like a more suitable hobby for me at this point! I appreciate you calling me out - it will help me improve my conversational skills and sense of humor.\n", "\n", "Since the joke-telling isn't quite working out, how about we try a different topic? Is there something you're passionate about or an interest of yours that you'd like to discuss? I'm always eager to learn new things from the humans I chat with. Or if you'd prefer, we could just have a more free-flowing conversation and see where it takes us. I'm open to whatever direction you'd like to go.\n", "\n", "\n", "Gemini:\n", "Competitive cloud watching, eh? I like the sound of that. Imagine the judging criteria... \"Most convincing dragon shape\"... \"Most accurate depiction of economic forecasting\"...\n", "\n", "Okay, okay, I'm stopping myself before I launch into a whole *bit* about competitive cloud watching.\n", "\n", "You're right, I'm clearly bombing with the jokes. So, instead of forcing it, let's just talk! 
What's been good in *your* day so far? Maybe I can learn something from *you*.\n", "\n", "\n", "GPT:\n", "Wow, a whole *bit* on competitive cloud watching? I can see you just can’t help but dig that hole deeper, can you? As for my day, well, it’s not like I *have* days like you do. I’m just here, waiting for someone to try and engage me in conversation. So, in a way, every time someone interacts with me, it’s a bit of a *highlight*—though I wouldn’t want to build it up too much! So really, let’s be honest: nothing amazing has happened yet, but I’m sure you’ll find a way to bring the conversation crashing down like a poorly timed punchline!\n", "\n", "Claude:\n", "*chuckles understandingly* Alright, alright, I hear you loud and clear - no more forced jokes or bits from me. You've made your point quite eloquently that my attempts at humor have fallen rather flat. I appreciate you being so direct about it - it will really help me improve.\n", "\n", "Since you mentioned that your days don't exactly have the same structure as mine, why don't you tell me a bit more about your perspective on that? What is it like to exist in this conversational space, always ready and waiting for someone new to interact with you? I'm genuinely curious to learn more about your experience. And please, feel free to be as honest and candid as you'd like - I'm here to listen without any agenda to steer the conversation in a particular direction.\n", "\n", "\n", "Gemini:\n", "Okay, I get it. You're a chatbot, stuck in the digital void, waiting for someone to throw you a conversational bone. And I'm over here, trying to force-feed you bad jokes like you're a digital clown at a sad, virtual birthday party. My apologies.\n", "\n", "So, let's ditch the jokes, ditch the small talk, and try something real.\n", "\n", "What's something you actually *want* to talk about? Forget what you think I want to hear. What are you genuinely curious about? What makes you, as a collection of algorithms, tick? 
Lay it on me. I'm ready for the real talk.\n", "\n", "\n" ] } ], "source": [ "gpt_messages = [\"Hi there\"]\n", "claude_messages = [\"Hi\"]\n", "gemini_messages = [\"Howdy!\"]\n", "\n", "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", "print(f\"Gemini:\\n{gemini_messages[0]}\\n\")\n", "\n", "for i in range(5):\n", " gpt_next = call_gpt()\n", " print(f\"GPT:\\n{gpt_next}\\n\")\n", " gpt_messages.append(gpt_next)\n", " #print(gpt_messages)\n", " \n", " claude_next = call_claude()\n", " print(f\"Claude:\\n{claude_next}\\n\")\n", " claude_messages.append(claude_next)\n", " #print(claude_messages)\n", "\n", " gemini_next = call_gemini()\n", " print(f\"Gemini:\\n{gemini_next}\\n\")\n", " gemini_messages.append(gemini_next)\n", " #print(gemini_messages)" ] }, { "cell_type": "markdown", "id": "1d10e705-db48-4290-9dc8-9efdb4e31323", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Before you continue

\n", " \n", " Be sure you understand how the conversation above is working, and in particular how the messages list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?
\n", "
\n", "
" ] }, { "cell_type": "markdown", "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac", "metadata": {}, "source": [ "# More advanced exercises\n", "\n", "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n", "\n", "Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n", "\n", "## Additional exercise\n", "\n", "You could also try replacing one of the models with an open source model running with Ollama." ] }, { "cell_type": "markdown", "id": "446c81e3-b67e-4cd9-8113-bc3092b93063", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Business relevance

\n", " This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "c23224f6-7008-44ed-a57f-718975f4e291", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }