More comments, notes and improvements

Edward Donner
2024-10-29 19:10:07 -04:00
parent 90d9dde3de
commit b51953eb12
6 changed files with 73 additions and 22 deletions

View File

@@ -23,6 +23,8 @@
"\n",
"If you haven't done so already, you'll need to create API keys from OpenAI, Anthropic and Google.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
@@ -37,7 +39,9 @@
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"OR enter the keys directly in the cells below."
"OR enter the keys directly in the cells below.\n",
"\n",
"You may need to restart the Jupyter Notebook Kernel (the Python process) via the Kernel menu."
]
},
{
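For reference, a minimal sketch of how these keys might be loaded in the notebook, assuming the `python-dotenv` package; only `GOOGLE_API_KEY` is visible in the hunk above, so the `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` names are assumptions:

```python
# Minimal sketch: load keys from a .env file (assumes python-dotenv is installed;
# OPENAI_API_KEY / ANTHROPIC_API_KEY names are assumed, only GOOGLE_API_KEY appears above)
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current directory into the environment

openai_api_key = os.getenv("OPENAI_API_KEY")
anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
google_api_key = os.getenv("GOOGLE_API_KEY")

# Confirm each key is present without printing the secret itself
for name, key in [("OpenAI", openai_api_key), ("Anthropic", anthropic_api_key), ("Google", google_api_key)]:
    print(f"{name} key {'found' if key else 'not set'}")
```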
@@ -259,8 +263,8 @@
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution?\"}\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
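For context, a hedged sketch of how the `prompts` list above might be sent to GPT-4o-mini and the Markdown reply rendered, assuming the `openai` 1.x client and IPython display; the notebook's actual call may differ:

```python
# Sketch only: send the prompts above to GPT-4o-mini and render the Markdown reply
from openai import OpenAI
from IPython.display import Markdown, display

openai_client = OpenAI()  # uses OPENAI_API_KEY from the environment

completion = openai_client.chat.completions.create(model="gpt-4o-mini", messages=prompts)
display(Markdown(completion.choices[0].message.content))
```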

View File

@@ -164,7 +164,7 @@
"metadata": {},
"outputs": [],
"source": [
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", allow_flagging=\"never\").launch(share=True)"
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)"
]
},
{
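The repeated change in this file swaps `allow_flagging` for `flagging_mode`, the newer name for the same `gr.Interface` parameter. A self-contained sketch of this first example, assuming `shout` is the simple demo function that upper-cases its input:

```python
# Minimal sketch of the first Gradio example; assumes a recent Gradio where flagging_mode replaced allow_flagging
import gradio as gr

def shout(text):
    # Demo function (assumed): echo the input in upper case
    return text.upper()

gr.Interface(fn=shout, inputs="textbox", outputs="textbox", flagging_mode="never").launch(share=True)
```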
@@ -178,7 +178,7 @@
" fn=shout,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
" outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
@@ -194,7 +194,7 @@
" fn=message_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
" outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
@@ -212,7 +212,7 @@
" fn=message_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
@@ -253,7 +253,7 @@
" fn=stream_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
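`stream_gpt` itself isn't part of this diff; the following is only a hedged sketch of what such a streaming generator usually looks like for Gradio, assuming the `openai` 1.x client and a `system_message` string defined elsewhere in the notebook. Each yield returns the full text accumulated so far, which is what Gradio's streaming output expects:

```python
# Sketch of a streaming generator for Gradio (assumptions: openai 1.x client, system_message defined elsewhere)
from openai import OpenAI

openai_client = OpenAI()

def stream_gpt(prompt):
    messages = [
        {"role": "system", "content": system_message},  # system_message is assumed to exist in the notebook
        {"role": "user", "content": prompt},
    ]
    stream = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages, stream=True)
    result = ""
    for chunk in stream:
        # Each chunk carries an incremental delta; yield the accumulated text for Gradio
        result += chunk.choices[0].delta.content or ""
        yield result
```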
@@ -293,11 +293,32 @@
" fn=stream_claude,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "markdown",
"id": "bc5a70b9-2afe-4a7c-9bed-2429229e021b",
"metadata": {},
"source": [
"## Minor improvement\n",
"\n",
"I've made a small improvement to this code.\n",
"\n",
"Previously, it had these lines:\n",
"\n",
"```\n",
"for chunk in result:\n",
" yield chunk\n",
"```\n",
"\n",
"There's actually a more elegant way to achieve this (which Python people might call more 'Pythonic'):\n",
"\n",
"`yield from result`"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -312,8 +333,7 @@
" result = stream_claude(prompt)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" for chunk in result:\n",
" yield chunk"
" yield from result"
]
},
{
@@ -327,7 +347,7 @@
" fn=stream_model,\n",
" inputs=[gr.Textbox(label=\"Your message:\"), gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
@@ -397,8 +417,7 @@
" result = stream_claude(prompt)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" for chunk in result:\n",
" yield chunk"
" yield from result"
]
},
{
@@ -412,10 +431,10 @@
" fn=stream_brochure,\n",
" inputs=[\n",
" gr.Textbox(label=\"Company name:\"),\n",
" gr.Textbox(label=\"Landing page URL:\"),\n",
" gr.Textbox(label=\"Landing page URL including http:// or https://\"),\n",
" gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\")],\n",
" outputs=[gr.Markdown(label=\"Brochure:\")],\n",
" allow_flagging=\"never\"\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
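One reason for the clearer URL label: fetching a landing page fails without an explicit scheme. A hypothetical guard that a function like `stream_brochure` could apply (not the notebook's actual code):

```python
# Hypothetical guard, illustrating why the label asks for http:// or https://
def check_url(url):
    if not url.startswith(("http://", "https://")):
        raise ValueError("Please include http:// or https:// in the landing page URL")
    return url
```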
@@ -427,6 +446,14 @@
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3d3bf11-e02c-492b-96f1-f4dd7df6f4d7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {