{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0e5dc476-e3c9-49bd-934a-35dbe0d55b13",
   "metadata": {},
   "source": [
    "# End of week 1 exercise (with user input: question and model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "353fba18-a9b4-4ba8-be7e-f3e3c37521ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import requests\n",
    "from dotenv import load_dotenv\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display, update_display\n",
    "from openai import OpenAI\n",
    "import ollama"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be2b859d-b3d2-41f7-8666-28ecde26e3b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# set up environment and constants\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "if api_key and api_key.startswith('sk-proj-') and len(api_key) > 10:\n",
    "    print(\"API key looks good so far\")\n",
    "else:\n",
    "    print(\"There might be a problem with your API key. Please visit the troubleshooting notebook!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1b2b694-11a1-4d2a-8e34-d1fb02617fa3",
   "metadata": {},
   "outputs": [],
   "source": [
    "system_prompt = \"You are an expert coder with educational skills for beginners. \\\n",
    "You are able to explain, debug or generate code in Python, R or bash, and to provide examples of use cases if applicable. \\\n",
    "Please add references to relevant sources if available. If not, do not invent.\\n\"\n",
    "system_prompt += \"This is an example of a response:\"\n",
    "system_prompt += \"\"\"\n",
    "Sure! Here’s the explanation in plain text format, suitable for Markdown:\n",
    "\n",
    "# Explanation of the Code\n",
    "\n",
    "### Code:\n",
    "```python\n",
    "full_name = lambda first, last: f'Full name: {first.title()} {last.title()}'\n",
    "```\n",
    "\n",
    "### Explanation:\n",
    "\n",
    "1. **Lambda Function:**\n",
    "   - The keyword `lambda` is used to create a small, one-line anonymous function (a function without a name).\n",
    "   - It takes two parameters: `first` (for the first name) and `last` (for the last name).\n",
    "\n",
    "2. **String Formatting (`f-string`):**\n",
    "   - `f'Full name: {first.title()} {last.title()}'` is a formatted string (f-string).\n",
    "   - It inserts the values of `first` and `last` into the string while applying `.title()` to capitalize the first letter of each name.\n",
    "\n",
    "3. **Assigning the Function:**\n",
    "   - The lambda function is assigned to the variable `full_name`, so we can use `full_name()` like a regular function.\n",
    "\n",
    "### How to Use It:\n",
    "Now, let’s call this function and see what it does.\n",
    "\n",
    "```python\n",
    "print(full_name(\"john\", \"doe\"))\n",
    "```\n",
    "\n",
    "#### Output:\n",
    "```\n",
    "Full name: John Doe\n",
    "```\n",
    "\n",
    "### What Happens:\n",
    "- `\"john\"` becomes `\"John\"` (because `.title()` capitalizes the first letter).\n",
    "- `\"doe\"` becomes `\"Doe\"`.\n",
    "- The output is `\"Full name: John Doe\"`.\n",
    "\n",
    "### Summary:\n",
    "This is a simple way to create a function that formats a full name while ensuring proper capitalization. You could write the same function using `def` like this:\n",
    "\n",
    "```python\n",
    "def full_name(first, last):\n",
    "    return f'Full name: {first.title()} {last.title()}'\n",
    "```\n",
    "\n",
    "Both versions work the same way, but the `lambda` version is more compact.\n",
    "\n",
    "### Reference(s):\n",
    "To deepen your understanding of the code snippet involving Python's lambda functions, here is a resource you might find helpful:\n",
    "\n",
    "**Python Lambda Functions:**\n",
    "   - The official Python documentation provides an in-depth explanation of lambda expressions, including their syntax and use cases.\n",
    "   - [Lambda Expressions](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions)\n",
    "\n",
    "You can copy and paste this into any Markdown file or viewer. Let me know if you need further modifications! 😊\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f7225ab0-5ade-4c93-839c-3c80b0b23c37",
   "metadata": {},
   "outputs": [],
   "source": [
    "# uncomment to preview the system prompt rendered as Markdown\n",
    "# display(Markdown(system_prompt))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "07fa2506-4b24-4a53-9f3f-500b4cbcb10a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# user question\n",
    "default_question = \"\"\"\n",
    "Please explain what this code does and why:\n",
    "yield from {book.get('author') for book in books if book.get('author')}\n",
    "\"\"\"\n",
    "user_question = input(\"What code do you want me to explain?\\n(Press 'Enter' for an example)\")\n",
    "\n",
    "if user_question == '':\n",
    "    question = default_question\n",
    "    print(default_question)\n",
    "else:\n",
    "    question = \"Please explain what this code does and why:\\n\" + user_question"
   ]
  },
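  {
   "cell_type": "markdown",
   "id": "default-question-sketch-md",
   "metadata": {},
   "source": [
    "Side note: before asking a model, it can help to see what the default snippet actually does. The next cell is a minimal sketch (not part of the exercise) that runs it on a small made-up `books` list; the sample data and the `authors()` wrapper are assumptions for illustration only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "default-question-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch (sample data is made up): the set comprehension keeps the\n",
    "# unique, non-missing authors, and `yield from` makes `authors()` a generator\n",
    "# that yields each of them in turn.\n",
    "books = [\n",
    "    {\"title\": \"A\", \"author\": \"Ada\"},\n",
    "    {\"title\": \"B\", \"author\": \"Linus\"},\n",
    "    {\"title\": \"C\"},                   # no author: filtered out\n",
    "    {\"title\": \"D\", \"author\": \"Ada\"},  # duplicate: collapsed by the set\n",
    "]\n",
    "\n",
    "def authors():\n",
    "    yield from {book.get('author') for book in books if book.get('author')}\n",
    "\n",
    "print(sorted(authors()))  # ['Ada', 'Linus'] (sets are unordered, so we sort)"
   ]
  },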
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a6749065-fb8a-4f9f-8297-3cd33abd97bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(question)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f48df06c-edb7-4a05-9e56-910854dad0c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# user model\n",
    "model_number = input(\"\"\"\n",
    "Please enter the number of the model you want to use from the list below:\n",
    "1 GPT-4o Mini\n",
    "2 Llama 3.2\n",
    "3 DeepSeek R1\n",
    "4 Qwen 2.5\n",
    "\"\"\")\n",
    "try:\n",
    "    if int(model_number) == 1:\n",
    "        model = 'gpt-4o-mini'\n",
    "    elif int(model_number) == 2:\n",
    "        model = 'llama3.2'\n",
    "    elif int(model_number) == 3:\n",
    "        model = 'deepseek-r1:1.5b'\n",
    "    elif int(model_number) == 4:\n",
    "        model = 'qwen2.5:3b'\n",
    "    else:\n",
    "        model = ''\n",
    "        print(\"Please provide only a number from the list\")\n",
    "except ValueError:\n",
    "    model = ''\n",
    "    print(\"Please provide a number from the list\")"
   ]
  },
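  {
   "cell_type": "markdown",
   "id": "model-menu-sketch-md",
   "metadata": {},
   "source": [
    "An optional alternative sketch: a dictionary keyed by menu number expresses the same mapping more compactly and is easier to extend. The `MODELS` name and the `pick_model` helper are illustrative, not part of the exercise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "model-menu-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative alternative (hypothetical helper): same menu, expressed as a dict.\n",
    "MODELS = {1: 'gpt-4o-mini', 2: 'llama3.2', 3: 'deepseek-r1:1.5b', 4: 'qwen2.5:3b'}\n",
    "\n",
    "def pick_model(raw):\n",
    "    \"\"\"Return the model name for a menu entry, or '' if the input is invalid.\"\"\"\n",
    "    try:\n",
    "        return MODELS.get(int(raw), '')\n",
    "    except ValueError:\n",
    "        return ''\n",
    "\n",
    "# pick_model('2') -> 'llama3.2'; pick_model('x') -> ''"
   ]
  },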
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aeb6e4e5-fb63-4192-bb74-0b015dfedfb7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# uncomment to check which model was selected\n",
    "# print(model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fffa6021-d3f8-4855-a694-bed6d651791f",
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": system_prompt},\n",
    "    {\"role\": \"user\", \"content\": question}\n",
    "]"
   ]
  },
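  {
   "cell_type": "markdown",
   "id": "messages-few-shot-md",
   "metadata": {},
   "source": [
    "The same `{\"role\": ..., \"content\": ...}` list format is accepted by both the OpenAI and Ollama chat APIs, which is why one `messages` object can serve both branches below. As an optional sketch, the worked example embedded in the system prompt could instead be passed as an explicit few-shot exchange; the strings below are illustrative placeholders:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "messages-few-shot-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch (illustrative content): provide the example as an explicit\n",
    "# user/assistant turn instead of embedding it in the system prompt.\n",
    "few_shot_messages = [\n",
    "    {\"role\": \"system\", \"content\": \"You are an expert coder with educational skills for beginners.\"},\n",
    "    {\"role\": \"user\", \"content\": \"Please explain: full_name = lambda first, last: f'{first} {last}'\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"This lambda builds a one-line function that joins two names.\"},\n",
    "    {\"role\": \"user\", \"content\": question}\n",
    "]"
   ]
  },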
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "835374a4-3df5-4f28-82e3-6bc70514df16",
   "metadata": {},
   "outputs": [],
   "source": [
    "if model == 'gpt-4o-mini':\n",
    "    openai = OpenAI()\n",
    "    stream = openai.chat.completions.create(\n",
    "        model=model,\n",
    "        messages=messages,\n",
    "        stream=True\n",
    "    )\n",
    "\n",
    "    response = \"\"\n",
    "    print(\"The following answer will be generated by {0} LLM\".format(model))\n",
    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
    "    for chunk in stream:\n",
    "        response += chunk.choices[0].delta.content or ''\n",
    "        response = response.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
    "        update_display(Markdown(response), display_id=display_handle.display_id)\n",
    "elif model:  # any of the Ollama models from the menu\n",
    "    !ollama pull {model}\n",
    "    print(\"\\n\\nThe following answer will be generated by {0} LLM\\n\\n\".format(model))\n",
    "    response = ollama.chat(\n",
    "        model=model,\n",
    "        messages=messages)\n",
    "    result = response['message']['content']\n",
    "    display(Markdown(result))\n",
    "else:\n",
    "    print(\"No valid model selected; please rerun the model-selection cell.\")"
   ]
  },
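  {
   "cell_type": "markdown",
   "id": "explain-helper-md",
   "metadata": {},
   "source": [
    "Finally, an optional sketch matching the title's `(question, model)` idea: a single hypothetical `explain()` helper that wraps both code paths (non-streaming, for simplicity). It assumes the cells above have run, so the API key is loaded and any Ollama model has already been pulled; `explain` and `OPENAI_MODELS` are illustrative names."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "explain-helper-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch: one reusable entry point for both providers.\n",
    "# `explain` and OPENAI_MODELS are illustrative, not part of the exercise.\n",
    "OPENAI_MODELS = {'gpt-4o-mini'}\n",
    "\n",
    "def explain(question, model):\n",
    "    msgs = [\n",
    "        {\"role\": \"system\", \"content\": system_prompt},\n",
    "        {\"role\": \"user\", \"content\": question}\n",
    "    ]\n",
    "    if model in OPENAI_MODELS:\n",
    "        reply = OpenAI().chat.completions.create(model=model, messages=msgs)\n",
    "        result = reply.choices[0].message.content\n",
    "    else:\n",
    "        result = ollama.chat(model=model, messages=msgs)['message']['content']\n",
    "    display(Markdown(result))\n",
    "\n",
    "# Example (uncomment to run):\n",
    "# explain(question, model)"
   ]
  }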
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}