{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
   "metadata": {},
   "source": [
    "# HOMEWORK EXERCISE ASSIGNMENT\n",
    "\n",
    "Upgrade the day 1 webpage-summarizer project to use an open-source model running locally via Ollama rather than OpenAI.\n",
    "\n",
    "You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
    "\n",
    "**Benefits:**\n",
    "1. No API charges - the model is open-source\n",
    "2. Data doesn't leave your box\n",
    "\n",
    "**Disadvantages:**\n",
    "1. Significantly less power than frontier models\n",
    "\n",
    "## Recap on installation of Ollama\n",
    "\n",
    "Simply visit [ollama.com](https://ollama.com) and install!\n",
    "\n",
    "Once installation is complete, the Ollama server should already be running locally. \n",
    "If you visit [http://localhost:11434/](http://localhost:11434/) you should see the message `Ollama is running`. \n",
    "\n",
    "If not, bring up a new Terminal (Mac) or PowerShell (Windows), enter `ollama serve`, \n",
    "then try [http://localhost:11434/](http://localhost:11434/) again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display"
   ]
  },
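  {
   "cell_type": "markdown",
   "id": "b2c7e3a1-4f5d-4c2a-9e8b-1a2b3c4d5e6f",
   "metadata": {},
   "source": [
    "Before going further, it can help to confirm from Python that the server is reachable. The cell below is a minimal sketch using the `requests` import above: it fetches the root endpoint and prints the response, which should be `Ollama is running`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d8f4b2-5a6e-4d3b-8f9c-2b3c4d5e6f7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick sanity check that the Ollama server is up\n",
    "# (if this raises a ConnectionError, run `ollama serve` in a terminal first)\n",
    "\n",
    "response = requests.get(\"http://localhost:11434/\")\n",
    "print(response.text)"
   ]
  },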
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Constants\n",
    "\n",
    "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
    "HEADERS = {\"Content-Type\": \"application/json\"}\n",
    "MODEL = \"llama3.2\""
   ]
  },
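  {
   "cell_type": "markdown",
   "id": "d4e9a5c3-6b7f-4e4c-9a0d-3c4d5e6f7a8b",
   "metadata": {},
   "source": [
    "If you haven't downloaded the model yet, pull it first - either with `ollama pull llama3.2` in a terminal, or straight from the notebook as below (the `!` prefix runs a shell command)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f0b6d4-7c8a-4f5d-8b1e-4d5e6f7a8b9c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download the llama3.2 model weights if they aren't already present\n",
    "!ollama pull llama3.2"
   ]
  },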
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a messages list using the same format that we used for OpenAI\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the request payload; stream=False asks for the whole reply in one response\n",
    "\n",
    "payload = {\n",
    "    \"model\": MODEL,\n",
    "    \"messages\": messages,\n",
    "    \"stream\": False\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Call the Ollama chat endpoint directly over HTTP and print the reply\n",
    "\n",
    "response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
    "print(response.json()['message']['content'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
   "metadata": {},
   "source": [
    "# Introducing the ollama package\n",
    "\n",
    "And now we'll do the same thing, but using the elegant ollama Python package instead of a direct HTTP call.\n",
    "\n",
    "Under the hood, it's making the same call as above to the Ollama server running at localhost:11434."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import ollama\n",
    "\n",
    "response = ollama.chat(model=MODEL, messages=messages)\n",
    "print(response['message']['content'])"
   ]
  },
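  {
   "cell_type": "markdown",
   "id": "f6a1c7e5-8d9b-4a6e-9c2f-5e6f7a8b9c0d",
   "metadata": {},
   "source": [
    "As an aside, the ollama package can also stream the reply token by token, which feels more responsive for long answers. The sketch below assumes the package's `stream=True` option, which returns an iterator of chunks rather than a single response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7b2d8f6-9e0c-4b7f-8d3a-6f7a8b9c0d1e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A streaming sketch: each chunk carries a small piece of the message content\n",
    "\n",
    "stream = ollama.chat(model=MODEL, messages=messages, stream=True)\n",
    "for chunk in stream:\n",
    "    print(chunk['message']['content'], end='', flush=True)"
   ]
  },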
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a611b05-b5b0-4c83-b82d-b3a39ffb917d",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
   "metadata": {},
   "source": [
    "# NOW the exercise for you\n",
    "\n",
    "Take the code from day 1 and incorporate it here to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI. If you get stuck, one possible sketch follows below."
   ]
  },
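  {
   "cell_type": "markdown",
   "id": "b8c3e9a7-0f1d-4c8a-9e4b-7a8b9c0d1e2f",
   "metadata": {},
   "source": [
    "The cells below are a minimal sketch of one possible solution, not the official one: they reuse the day 1 scraping approach (a `Website` class built on requests and BeautifulSoup) and swap the OpenAI call for `ollama.chat`. The system prompt wording and the example URL are placeholders to adapt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9d4f0b8-1a2e-4d9b-8f5c-8b9c0d1e2f3a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution sketch - scrape a page, then summarize it with Llama 3.2 locally\n",
    "\n",
    "class Website:\n",
    "    \"\"\"A scraped webpage, as in the day 1 project.\"\"\"\n",
    "\n",
    "    def __init__(self, url):\n",
    "        self.url = url\n",
    "        response = requests.get(url)\n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
    "        # Strip elements that carry no useful text before extracting\n",
    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
    "            irrelevant.decompose()\n",
    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
    "\n",
    "def summarize(url):\n",
    "    website = Website(url)\n",
    "    messages = [\n",
    "        {\"role\": \"system\", \"content\": \"You are an assistant that summarizes websites in markdown.\"},\n",
    "        {\"role\": \"user\", \"content\": f\"Website title: {website.title}\\n\\nContents:\\n{website.text}\\n\\nPlease give a short markdown summary of this website.\"}\n",
    "    ]\n",
    "    response = ollama.chat(model=MODEL, messages=messages)\n",
    "    return response['message']['content']\n",
    "\n",
    "# Example usage - swap in any URL you like\n",
    "display(Markdown(summarize(\"https://example.com\")))"
   ]
  }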
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}