Simplified based on latest Gradio, and added PC fix for audio playback
@@ -66,7 +66,13 @@
"id": "98e97227-f162-4d1a-a0b2-345ff248cbe7",
"metadata": {},
"source": [
"## Reminder of the structure of prompt messages to OpenAI:\n",
"# Please read this! A change from the video:\n",
"\n",
"In the video, I explain how we now need to write a function called:\n",
"\n",
"`chat(message, history)`\n",
"\n",
"Which expects to receive `history` in a particular format, which we need to map to the OpenAI format before we call OpenAI:\n",
"\n",
"```\n",
"[\n",
@@ -77,18 +83,15 @@
"]\n",
"```\n",
"\n",
"We will write a function `chat(message, history)` where:\n",
"**message** is the prompt to use\n",
"**history** is a list of pairs of user message with assistant's reply\n",
"But Gradio has been upgraded! Now it will pass in `history` in the exact OpenAI format, perfect for us to send straight to OpenAI.\n",
"\n",
"```\n",
"[\n",
"    [\"user said this\", \"assistant replied\"],\n",
"    [\"then user said this\", \"and assistant replied again\"],\n",
"    ...\n",
"]\n",
"```\n",
"We will convert this history into the prompt style for OpenAI, then call OpenAI. "
"So our work just got easier!\n",
"\n",
"We will write a function `chat(message, history)` where: \n",
"**message** is the prompt to use \n",
"**history** is the past conversation, in OpenAI format \n",
"\n",
"We will combine the system message, history and latest message, then call OpenAI."
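The combination described in the cell above can be sketched in a few lines. This is an illustrative snippet, not part of the notebook: `system_message` is a placeholder string and `build_messages` is a hypothetical helper name.

```python
# Hypothetical sketch: combine the system message, the OpenAI-format history,
# and the latest user message into a single list of messages.

system_message = "You are a helpful assistant"  # placeholder

def build_messages(message, history):
    # history already arrives as [{"role": ..., "content": ...}, ...]
    return [{"role": "system", "content": system_message}] + history + [{"role": "user", "content": message}]

history = [
    {"role": "user", "content": "user said this"},
    {"role": "assistant", "content": "assistant replied"},
]
messages = build_messages("then user said this", history)
# messages starts with the system prompt and ends with the new user turn
```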
]
},
{
@@ -98,12 +101,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Simpler than in my video - we can easily create this function that calls OpenAI\n",
"# It's just 1 line of code to prepare the input to OpenAI!\n",
"\n",
"def chat(message, history):\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}]\n",
"    for user_message, assistant_message in history:\n",
"        messages.append({\"role\": \"user\", \"content\": user_message})\n",
"        messages.append({\"role\": \"assistant\", \"content\": assistant_message})\n",
"    messages.append({\"role\": \"user\", \"content\": message})\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
"    print(\"History is:\")\n",
"    print(history)\n",
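To see why the diff can replace the loop with one line, here is a hedged side-by-side sketch (function names and sample data are illustrative): given the same conversation, the old pair-format loop and the new concatenation build identical message lists.

```python
system_message = "You are a helpful assistant"  # placeholder

def messages_from_pairs(message, pair_history):
    # Old Gradio style: history as [user, assistant] pairs
    messages = [{"role": "system", "content": system_message}]
    for user_message, assistant_message in pair_history:
        messages.append({"role": "user", "content": user_message})
        messages.append({"role": "assistant", "content": assistant_message})
    messages.append({"role": "user", "content": message})
    return messages

def messages_from_history(message, history):
    # New Gradio style: history is already a list of OpenAI-format dicts
    return [{"role": "system", "content": system_message}] + history + [{"role": "user", "content": message}]

old_style = messages_from_pairs("how are you?", [["hi", "hello"]])
new_style = messages_from_history("how are you?", [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
])
```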
@@ -133,7 +135,7 @@
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
@@ -158,11 +160,7 @@
"outputs": [],
"source": [
"def chat(message, history):\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}]\n",
"    for user_message, assistant_message in history:\n",
"        messages.append({\"role\": \"user\", \"content\": user_message})\n",
"        messages.append({\"role\": \"assistant\", \"content\": assistant_message})\n",
"    messages.append({\"role\": \"user\", \"content\": message})\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
"    stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",
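The streaming call above is typically consumed by accumulating each chunk's delta and yielding the growing reply, so Gradio can update the chat window incrementally. Below is a hedged sketch: `stream_reply` is a hypothetical helper, and the fake chunk objects only mimic the shape of the chunks returned by `openai.chat.completions.create(..., stream=True)`.

```python
from types import SimpleNamespace

def stream_reply(stream):
    # Accumulate the text deltas and yield the partial reply after each chunk
    response = ""
    for chunk in stream:
        response += chunk.choices[0].delta.content or ""
        yield response

def make_chunk(text):
    # Stand-in for a streaming chunk: chunk.choices[0].delta.content
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

# The final chunk's content is often None, hence the `or ""` above
fake_stream = [make_chunk("Hel"), make_chunk("lo"), make_chunk(None)]
partials = list(stream_reply(fake_stream))
```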
@@ -179,7 +177,7 @@
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
@@ -200,7 +198,7 @@
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
@@ -211,10 +209,7 @@
"outputs": [],
"source": [
"def chat(message, history):\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}]\n",
"    for user_message, assistant_message in history:\n",
"        messages.append({\"role\": \"user\", \"content\": user_message})\n",
"        messages.append({\"role\": \"assistant\", \"content\": assistant_message})\n",
"    messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
"    if 'belt' in message:\n",
"        messages.append({\"role\": \"system\", \"content\": \"For added context, the store does not sell belts, \\\n",
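The keyword check in this hunk can be sketched on its own: when the user mentions a product the store doesn't stock, append one more system message with extra context before calling the API. `add_store_context` is a hypothetical helper name, and the context wording here is a shortened placeholder for the notebook's full text.

```python
def add_store_context(messages, message):
    # Inject extra context only when the trigger keyword appears
    if "belt" in message:
        messages.append({"role": "system", "content": "For added context, the store does not sell belts."})
    return messages

with_context = add_store_context([], "do you sell belts?")
without_context = add_store_context([], "any hats in stock?")
```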
@@ -237,13 +232,25 @@
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "82a57ee0-b945-48a7-a024-01b56a5d4b3e",
"metadata": {},
"source": [
"# Business Applications\n",
"\n",
"Conversational assistants are a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. Gradio makes it easy to build a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
"\n",
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "887fd6c1-2db0-4dc4-bc53-49399af8e035",
"id": "6dfb9e21-df67-4c2b-b952-5e7e7961b03d",
"metadata": {},
"outputs": [],
"source": []