Updated explanations and tips
@@ -14,6 +14,31 @@
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
"            <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
"            At the start of each week, it's worth checking you have the latest code.<br/>\n",
"            First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
"            After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
|
||||
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
|
||||
" <code>pip install -r requirements.txt</code>\n",
|
||||
" </span>\n",
|
||||
" </td>\n",
|
||||
" </tr>\n",
|
||||
"</table>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
|
||||
@@ -465,30 +490,64 @@
"    claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Before you continue</h2>\n",
"            <span style=\"color:#900;\">\n",
"            Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
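To make the mechanism above concrete, here is a minimal sketch of how a two-bot `messages` list can be populated. The variable and function names below are illustrative assumptions, not the notebook's exact code.

```python
# Illustrative sketch only - names are hypothetical, not the notebook's variables.
# Each bot keeps its own transcript; before every API call, the two transcripts are
# interleaved into a role-tagged messages list, so the full history is resent each turn.

gpt_system = "You are a chatbot who is very pessimistic."    # try swapping personalities here
claude_system = "You are a chatbot who is very optimistic."

gpt_transcript = ["Hi there"]     # what "GPT" has said so far
claude_transcript = ["Hi"]        # what "Claude" has said so far

def build_messages(system_prompt, own_turns, other_turns):
    """Build the messages list from one bot's point of view."""
    messages = [{"role": "system", "content": system_prompt}]
    for own, other in zip(own_turns, other_turns):
        messages.append({"role": "assistant", "content": own})   # this bot's earlier replies
        messages.append({"role": "user", "content": other})      # the other bot's replies
    return messages

# Print it to see exactly what would be sent on the next call
print(build_messages(gpt_system, gpt_transcript, claude_transcript))
```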
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# See the community-contributions folder\n",
"# More advanced exercises\n",
"\n",
"For a great variation with a 3-way bringing Gemini into the conversation!\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look in the folder.\n",
"Try doing this yourself before you look at the solutions.\n",
"\n",
"## Additional exercise\n",
"\n",
"Try adding in an Ollama model in to the conversation.\n",
|
||||
"\n",
|
||||
"## Business relevance\n",
|
||||
"\n",
|
||||
"This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business."
|
||||
"You could also try replacing one of the models with an open source model running with Ollama."
|
||||
]
|
||||
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#181;\">Business relevance</h2>\n",
"            <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d86790a-3a6f-4b18-ab0a-bc6107945a27",
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": []
@@ -172,6 +172,8 @@
"metadata": {},
"outputs": [],
"source": [
"# The simplicity of Gradio. This might appear in \"light mode\" - I'll show you how to make this appear in dark mode later.\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\").launch()"
]
},
@@ -182,9 +184,55 @@
"metadata": {},
"outputs": [],
"source": [
"# Adding share=True means that it can be accessed publically\n",
|
||||
"# A more permanent hosting is available using a platform called Spaces from HuggingFace, which we will touch on next week\n",
|
||||
"\n",
|
||||
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "cd87533a-ff3a-4188-8998-5bedd5ba2da3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Adding inbrowser=True opens up a new browser window automatically\n",
|
||||
"\n",
|
||||
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(inbrowser=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b42ec007-0314-48bf-84a4-a65943649215",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Forcing dark mode\n",
|
||||
"\n",
|
||||
"Gradio appears in light mode or dark mode depending on the settings of the browser and computer. There is a way to force gradio to appear in dark mode, but Gradio recommends against this as it should be a user preference (particularly for accessibility reasons). But if you wish to force dark mode for your screens, below is how to do it."
|
||||
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8129afa-532b-4b15-b93c-aa9cca23a546",
"metadata": {},
"outputs": [],
"source": [
"# Define this variable and then pass js=force_dark_mode when creating the Interface\n",
"\n",
"force_dark_mode = \"\"\"\n",
"function refresh() {\n",
"    const url = new URL(window.location);\n",
"    if (url.searchParams.get('__theme') !== 'dark') {\n",
"        url.searchParams.set('__theme', 'dark');\n",
"        window.location.href = url.href;\n",
"    }\n",
"}\n",
"\"\"\"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\", js=force_dark_mode).launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -192,6 +240,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Inputs and Outputs\n",
"\n",
"view = gr.Interface(\n",
"    fn=shout,\n",
"    inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
@@ -208,6 +258,8 @@
"metadata": {},
"outputs": [],
"source": [
"# And now - changing the function from \"shout\" to \"message_gpt\"\n",
"\n",
"view = gr.Interface(\n",
"    fn=message_gpt,\n",
"    inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
@@ -224,6 +276,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Let's use Markdown\n",
"# Are you wondering why it makes any difference to set system_message when it's not referred to in the code below it?\n",
"# I'm taking advantage of system_message being a global variable, used back in the message_gpt function (go take a look)\n",
"# Not a great software engineering practice, but quite common during Jupyter Lab R&D!\n",
"\n",
"system_message = \"You are a helpful assistant that responds in markdown\"\n",
"\n",
"view = gr.Interface(\n",
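As a hedged illustration of the global-variable point above (the real message_gpt body is not shown in this hunk, so the names and shape below are assumptions, not the notebook's code): the function reads the module-level system_message at call time, so reassigning the global changes the behaviour of an Interface that was already wired to the function.

```python
# Hypothetical sketch of the idea - NOT the notebook's actual message_gpt implementation.
system_message = "You are a helpful assistant"

def message_gpt_sketch(prompt):
    # Reads the *current* value of the global system_message at call time,
    # so reassigning the global later changes what this function would send.
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": prompt},
    ]
    return f"Would call the model with: {messages}"

print(message_gpt_sketch("hello"))   # uses the original system_message
system_message = "You are a helpful assistant that responds in markdown"
print(message_gpt_sketch("hello"))   # now picks up the reassigned global
```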
@@ -243,6 +300,8 @@
"outputs": [],
"source": [
"# Let's create a call that streams back results\n",
"# If you'd like a refresher on Generators (the \"yield\" keyword),\n",
"# please take a look at the Intermediate Python notebook in the week1 folder.\n",
"\n",
"def stream_gpt(prompt):\n",
"    messages = [\n",
@@ -334,7 +393,9 @@
"\n",
"There's actually a more elegant way to achieve this (which Python people might call more 'Pythonic'):\n",
"\n",
"`yield from result`"
"`yield from result`\n",
"\n",
"I cover this in more detail in the Intermediate Python notebook in the week1 folder - take a look if you'd like more."
]
},
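A small self-contained sketch of the equivalence being described (a generic example, not the notebook's streaming code): the explicit loop and `yield from` produce identical generators.

```python
# Generic illustration of "yield from" - not the notebook's exact streaming example.

def stream_loop(text):
    result = text.split()
    for item in result:       # explicit loop that yields each item
        yield item

def stream_pythonic(text):
    result = text.split()
    yield from result         # delegates to the iterable in a single statement

assert list(stream_loop("the quick brown fox")) == list(stream_pythonic("the quick brown fox"))
```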
{
@@ -380,6 +441,26 @@
"Now you know how - it's simple!"
]
},
{
"cell_type": "markdown",
"id": "92d7c49b-2e0e-45b3-92ce-93ca9f962ef4",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Before you read the next few cells</h2>\n",
"            <span style=\"color:#900;\">\n",
"            Try to do this yourself - go back to the company brochure in week1, day5 and add a Gradio UI to the end. Then come and look at the solution.\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -256,11 +256,19 @@
"id": "82a57ee0-b945-48a7-a024-01b56a5d4b3e",
"metadata": {},
"source": [
"# Business Applications\n",
"\n",
"Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
|
||||
"\n",
|
||||
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM."
|
||||
"<table style=\"margin: 0; text-align: left;\">\n",
|
||||
" <tr>\n",
|
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
|
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
|
||||
" </td>\n",
|
||||
" <td>\n",
|
||||
" <h2 style=\"color:#181;\">Business Applications</h2>\n",
|
||||
" <span style=\"color:#181;\">Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
|
||||
"<br/><br/>\n",
|
||||
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.</span>\n",
|
||||
" </td>\n",
|
||||
" </tr>\n",
|
||||
"</table>"
|
||||
]
|
||||
},
|
||||
{
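As a hedged sketch of the suggestion above (the business details, model name and helper function are illustrative assumptions, not part of the course code): the system prompt carries the business context and sets the tone, while the user prompt carries the actual question.

```python
# Illustrative sketch only - business context, model name and function are assumptions.
from openai import OpenAI

openai = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are the support assistant for Acme Coffee Roasters, an online specialty coffee shop. "
    "Answer questions about orders, brewing and subscriptions. "
    "Be warm and concise, and respond in markdown."
)

def assistant_reply(user_message):
    response = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(assistant_reply("How should I grind beans for a French press?"))
```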