Minor updates including pinning datasets version
@@ -290,12 +290,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# If you have access to this, here is the reasoning model o3-mini\n",
+"# If you have access to this, here is the reasoning model o4-mini\n",
 "# This is trained to think through its response before replying\n",
 "# So it will take longer but the answer should be more reasoned - not that this helps..\n",
 "\n",
 "completion = openai.chat.completions.create(\n",
-"    model='o3-mini',\n",
+"    model='o4-mini',\n",
 "    messages=prompts\n",
 ")\n",
 "print(completion.choices[0].message.content)"
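For orientation, the updated cell corresponds to roughly the following plain Python. Only the `create()` call appears in the hunk; the client setup and the contents of `prompts` live earlier in the notebook, so their form here is an assumption:

```python
# A minimal sketch of the updated cell; the prompt values are hypothetical.
from openai import OpenAI

openai = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompts = [
    {"role": "system", "content": "You are a helpful assistant"},    # hypothetical
    {"role": "user", "content": "Briefly explain reasoning models"}  # hypothetical
]

# o4-mini is a reasoning model: it thinks before replying, so calls take longer
completion = openai.chat.completions.create(
    model='o4-mini',
    messages=prompts
)
print(completion.choices[0].message.content)
```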
@@ -308,12 +308,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Claude 3.7 Sonnet\n",
+"# Claude 4.0 Sonnet\n",
 "# API needs system message provided separately from user prompt\n",
 "# Also adding max_tokens\n",
 "\n",
 "message = claude.messages.create(\n",
-"    model=\"claude-3-7-sonnet-latest\",\n",
+"    model=\"claude-sonnet-4-20250514\",\n",
 "    max_tokens=200,\n",
 "    temperature=0.7,\n",
 "    system=system_message,\n",
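The Anthropic call differs from the OpenAI one in the two ways the cell's comments mention: the system message is its own parameter, and `max_tokens` must be given explicitly. A hedged sketch, assuming `claude` is an `anthropic.Anthropic()` client and the two prompt strings are defined earlier in the notebook:

```python
# A sketch under the assumptions above; the prompt values are hypothetical.
import anthropic

claude = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

system_message = "You are a helpful assistant"  # hypothetical placeholder
user_prompt = "Tell me a light-hearted joke"    # hypothetical placeholder

message = claude.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=200,             # required by the Anthropic API
    temperature=0.7,
    system=system_message,      # system prompt passed separately, not in messages
    messages=[{"role": "user", "content": user_prompt}],
)
print(message.content[0].text)
```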
@@ -332,12 +332,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Claude 3.7 Sonnet again\n",
+"# Claude 4.0 Sonnet again\n",
 "# Now let's add in streaming back results\n",
 "# If the streaming looks strange, then please see the note below this cell!\n",
 "\n",
 "result = claude.messages.stream(\n",
-"    model=\"claude-3-7-sonnet-latest\",\n",
+"    model=\"claude-sonnet-4-20250514\",\n",
 "    max_tokens=200,\n",
 "    temperature=0.7,\n",
 "    system=system_message,\n",
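The streaming variant swaps `messages.create` for `messages.stream`, which returns a context manager. The diff assigns the call to `result`; the sketch below enters the context manager directly, which is equivalent, and reuses the same assumed client and prompts as above:

```python
# messages.stream() exposes text deltas via .text_stream; printing them with
# end="" gives the incremental typing effect the cell demonstrates.
with claude.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=200,
    temperature=0.7,
    system=system_message,
    messages=[{"role": "user", "content": user_prompt}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```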
@@ -408,12 +408,28 @@
 ")\n",
 "\n",
 "response = gemini_via_openai_client.chat.completions.create(\n",
-"    model=\"gemini-2.5-flash-preview-04-17\",\n",
+"    model=\"gemini-2.5-flash\",\n",
 "    messages=prompts\n",
 ")\n",
 "print(response.choices[0].message.content)"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "492f0ff2-8581-4836-bf00-37fddbe120eb",
+"metadata": {},
+"source": [
+"# Sidenote:\n",
+"\n",
+"This alternative approach of using the client library from OpenAI to connect with other models has become extremely popular in recent months.\n",
+"\n",
+"So much so, that all the models now support this approach - including Anthropic.\n",
+"\n",
+"You can read more about this approach, with 4 examples, in the first section of this guide:\n",
+"\n",
+"https://github.com/ed-donner/agents/blob/main/guides/09_ai_apis_and_ollama.ipynb"
+]
+},
 {
 "cell_type": "markdown",
 "id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab",
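The `gemini_via_openai_client` name reflects the pattern the new Sidenote cell describes: pointing the OpenAI client library at another provider's OpenAI-compatible endpoint. A sketch of how that client is typically constructed; the base URL is Google's documented OpenAI-compatibility endpoint, and the env var name is an assumption from outside this diff:

```python
import os
from openai import OpenAI

# OpenAI client library pointed at Google's OpenAI-compatible endpoint.
# GOOGLE_API_KEY as the env var name is an assumption.
gemini_via_openai_client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = gemini_via_openai_client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=prompts,  # the same chat-format message list used for the GPT calls
)
print(response.choices[0].message.content)
```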
@@ -583,7 +599,7 @@
 "# Have it stream back results in markdown\n",
 "\n",
 "stream = openai.chat.completions.create(\n",
-"    model='gpt-4o-mini',\n",
+"    model='gpt-4.1-mini',\n",
 "    messages=prompts,\n",
 "    temperature=0.7,\n",
 "    stream=True\n",
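With `stream=True` the call returns an iterator of chunks rather than one completion. Only the `create()` call is in this hunk; the display logic below is an assumption about how the notebook renders the chunks as live-updating markdown in Jupyter:

```python
from IPython.display import Markdown, display, update_display

stream = openai.chat.completions.create(
    model='gpt-4.1-mini',
    messages=prompts,
    temperature=0.7,
    stream=True
)

# Accumulate deltas and re-render a single Markdown display as text arrives
reply = ""
handle = display(Markdown(""), display_id=True)
for chunk in stream:
    reply += chunk.choices[0].delta.content or ''
    update_display(Markdown(reply), display_id=handle.display_id)
```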
@@ -634,11 +650,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
+"# Let's make a conversation between GPT-4.1-mini and Claude-3.5-haiku\n",
 "# We're using cheap versions of models so the costs will be minimal\n",
 "\n",
-"gpt_model = \"gpt-4o-mini\"\n",
-"claude_model = \"claude-3-haiku-20240307\"\n",
+"gpt_model = \"gpt-4.1-mini\"\n",
+"claude_model = \"claude-3-5-haiku-latest\"\n",
 "\n",
 "gpt_system = \"You are a chatbot who is very argumentative; \\\n",
 "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
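The two-model conversation works by replaying the shared transcript from each model's point of view: a model's own past turns become `assistant` messages and the other model's turns become `user` messages. A speculative sketch of one side of that loop; the notebook's actual helper functions are not part of this diff:

```python
# Hypothetical helper: rebuilds the history from GPT's perspective.
def call_gpt(gpt_messages, claude_messages):
    messages = [{"role": "system", "content": gpt_system}]
    for gpt_msg, claude_msg in zip(gpt_messages, claude_messages):
        messages.append({"role": "assistant", "content": gpt_msg})  # GPT's own turns
        messages.append({"role": "user", "content": claude_msg})    # Claude's turns
    completion = openai.chat.completions.create(
        model=gpt_model,  # "gpt-4.1-mini", as set in this hunk
        messages=messages
    )
    return completion.choices[0].message.content
```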
@@ -774,6 +790,19 @@
 "\n",
 "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
 "\n",
 "The most reliable way to do this involves thinking a bit differently about your prompts: just 1 system prompt and 1 user prompt each time, and in the user prompt list the full conversation so far.\n",
 "\n",
+"Something like:\n",
+"\n",
+"```python\n",
+"user_prompt = f\"\"\"\n",
+"    You are Alex, in conversation with Blake and Charlie.\n",
+"    The conversation so far is as follows:\n",
+"    {conversation}\n",
+"    Now with this, respond with what you would like to say next, as Alex.\n",
+"    \"\"\"\n",
+"```\n",
+"\n",
+"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
+"\n",
 "## Additional exercise\n",
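The snippet added by this hunk generalizes naturally: on each turn, build one user prompt containing the full transcript and ask one model to speak as its character. A speculative sketch of that step; every name and the transcript format are illustrative, not taken from any solution:

```python
# Hypothetical helper for the 3-way exercise described above.
def next_turn(client, model, speaker, others, conversation):
    user_prompt = f"""
    You are {speaker}, in conversation with {others}.
    The conversation so far is as follows:
    {conversation}
    Now with this, respond with what you would like to say next, as {speaker}.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": f"You are {speaker} in a three-way chat."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```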
@@ -824,7 +853,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.12"
+"version": "3.11.13"
 }
 },
 "nbformat": 4,

@@ -568,7 +568,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.12"
+"version": "3.11.13"
 }
 },
 "nbformat": 4,

@@ -301,7 +301,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.11.13"
 }
 },
 "nbformat": 4,