Launching refreshed version of LLM Engineering weeks 1-4 - see README

This commit is contained in:
Edward Donner
2025-10-11 15:58:39 -04:00
parent 3286cfb395
commit c7257b9ae6
68 changed files with 16583 additions and 3756 deletions

guides/01_intro.ipynb Normal file

@@ -0,0 +1,68 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Welcome to the Guides!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"I've designed this course to be suitable for a wide range of backgrounds. For those of you relatively new to this, I've prepared some technical briefings to build your expertise.\n",
"\n",
"These are designed to be self-study; you work through them at your own pace, investigating and experimenting.\n",
"\n",
"I've leaned heavily on our AI friends to write some of these guides, and I've tried to frame them so they're as useful and relevant to the course as possible.\n",
"\n",
"There's only one requirement for the course: plenty of patience! Keep in mind that one of the best ways to learn is by solving problems - if you feel frustrated with a challenging puzzle, remember that this is where the learning happens! And, get in touch if I can help."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Contents (select from the sub-directory in the explorer on the left)\n",
"\n",
"1. Intro - this contents page\n",
"2. The command line\n",
"3. Git and Github\n",
"4. Technical foundations (environment variables, networks, APIs, uv)\n",
"5. Notebooks\n",
"6. Python foundations\n",
"7. \"Vibe coding\" - successfully coding with the help of LLMs\n",
"8. Debugging techniques\n",
"9. APIs and Ollama\n",
"10. Intermediate-level Python, including decorators and async \n",
"11. Asynchronous Python\n",
"12. Starting your project - 3 crucial pieces of advice\n",
"13. Frontend Crash Course\n",
"14. Briefings on Docker and Terraform\n",
"\n",
"### Also see the community_contributions directory for some awesome Python cookbooks contributed by students! (Thank you!)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,63 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The command line"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Welcome to the guide to build your confidence working at the command line.\n",
"\n",
"By the end of this guide, you should be able to confidently work at the command line, creating directories, moving files and more!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## For Windows PC users\n",
"\n",
"Please see this briefing:\n",
"\n",
"https://chatgpt.com/share/68064acf-4d3c-8012-86a1-fb09a7c6f923"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## For Mac users\n",
"\n",
"Please see this briefing:\n",
"\n",
"https://chatgpt.com/share/68064bfe-662c-8012-8073-479c32595459"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please do experiment to build your confidence.\n",
"\n",
"ChatGPT, Claude, Gemini, DeepSeek and others are your friends! They are excellent at explaining any command line commands in detail. This will quickly become second nature."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,81 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Git and Github"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This guide is all about using source code control: Git and Github.\n",
"\n",
"By the end of this, you should be confident with everyday source control processes, including fetching the latest code and submitting a PR to merge your own changes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Foundational briefing\n",
"\n",
"Here is Git and Github for a PC or Mac audience:\n",
"\n",
"https://chatgpt.com/share/68061486-08b8-8012-97bc-3264ad5ebcd4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pulling latest code\n",
"\n",
"I regularly add improvements to the course with new examples, exercises and materials.\n",
"\n",
"Here are instructions for how to bring in the latest - the easy way, and the rigorous way!\n",
"\n",
"https://chatgpt.com/share/6806178b-0700-8012-836f-7e87b2670b7b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Contributing your changes to the repo to share with others\n",
"\n",
"Here are step-by-step instructions for submitting a PR.\n",
"\n",
"I'd be so grateful to include your contributions. It adds value for all other students, and I love to see it myself! As an added benefit, you get recognition in Github as a contributor to the repo. As this course involves building entire repos, I'd suggest that you create a Markdown file or Jupyter Notebook that links to your repo, and include it in community_contributions with a PR.\n",
"\n",
"Here are detailed instructions and explanations:\n",
"\n",
"https://chatgpt.com/share/6873c22b-2a1c-8012-bc9a-debdcf7c835b\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"source": [
"### If you'd like to become a Git pro\n",
"\n",
"If you want to go deep on using Git, here is a brilliant guide. Read this and you will know much more than me!\n",
"\n",
"https://beej.us/guide/bggit/\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,117 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Technical Foundations\n",
"\n",
"It's crucial that you feel comfortable with the basic technical concepts that we work with. This will make your experience of the entire course so much better - it can be very frustrating if you're not sure what's going on.\n",
"\n",
"These guides should build confidence in the underlying technologies we work with."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 1: ChatGPT versus OpenAI API\n",
"\n",
"### What's the difference between ChatGPT and the GPT API, both offered by OpenAI?\n",
"\n",
"#### ChatGPT is an end-user tool. It's a chat product designed for consumers who are AI users.\n",
"- It has a free plan, and it also has paid subscription plans with more features.\n",
"- The subscription plans give the user near-unlimited access to use the chat product.\n",
"\n",
"#### The API is a service provided for AI engineers - software engineers and data scientists - working on other commercial products.\n",
"- It allows technical people, like you and me, to access the underlying models (like “GPT-4.1” and “o3”) so that we can build our own products.\n",
"- If we wanted to, we could build our own version of ChatGPT using the API, and charge our end-users for it.\n",
"- Like most APIs, OpenAI charges a small amount based on API usage. For most examples on the course using gpt-4o-mini, it's of the order of $0.001 per API call.\n",
"\n",
"### I'm paying $20/month for ChatGPT - why do I need to pay more for the API?\n",
"\n",
"- Hopefully this is now clear. The API is not for consumers; it's for engineers to build their own platforms that they can charge for.\n",
"- If API access were included in your subscription, you could offer ChatGPT-like tools to others at a cheaper price, and put OpenAI out of business!\n",
"- Keep in mind: each API call may require 10,000,000,000,000 floating point calculations - that compute uses electricity!\n",
"\n",
"Instead of calling the API, you can run open-source models locally, but typically they perform about 1,000 times fewer calculations; even though that's tiny, the processing still hits your electricity bill."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 2: Taking a Screenshot\n",
"\n",
"You may already be familiar with \"taking a screenshot\" on your computer, but if not (or if you think this means taking a photo with your camera...), please review this tutorial:\n",
"\n",
"https://chatgpt.com/share/681f691b-6644-8012-b07d-207c68f259d5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 3: Environment Variables and the `.env` file\n",
"\n",
"This tutorial walks you through everything you need to know about .env files!\n",
"\n",
"Obviously you don't need to add the .env file to .gitignore, as I've already done that for you. But it hopefully explains the point well.\n",
"\n",
"https://chatgpt.com/share/68061e89-dd84-8012-829d-9f4506c7baaa"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 4: Networking basics\n",
"\n",
"This tutorial covers networking and typical issues with certificates, VPNs, DNS and the like.\n",
"\n",
"The sections give a summary; you should ask ChatGPT to expand on any section if it's relevant to your situation.\n",
"\n",
"https://chatgpt.com/share/680620ec-3b30-8012-8c26-ca86693d0e3d\n",
"\n",
"This is a more in-depth guide to tackling SSL / certificate issues, which are common in corporate environments:\n",
"\n",
"https://chatgpt.com/share/68c2efc4-0280-8012-933b-5e89d7db6b58"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 5: APIs and Client Libraries - foundational briefing\n",
"\n",
"We use APIs a lot in this course!\n",
"\n",
"It's essential to understand the fundamentals of what's going on when we make a call to an API, and to be comfortable with words like \"endpoint\" and \"client library\".\n",
"\n",
"Please review this guide:\n",
"\n",
"https://chatgpt.com/share/68062432-43c8-8012-ad91-6311d4ad5858"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Topic 6: uv, package management, environment management\n",
"\n",
"This lays out the dependency management landscape, why we love uv, and a crash course in how to use it.\n",
"\n",
"https://chatgpt.com/share/68c34d46-18a0-8012-8d65-0a0cce615912\n",
"\n",
"Note that this guide suggests `uv run python xxx` which works fine, but simply `uv run xxx` works too and is more common."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
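Topic 3 above describes environment variables and the `.env` file. As a rough illustration of what a `.env` loader actually does, here is a minimal stdlib-only sketch — this is not the real `python-dotenv` implementation (in the course we'd use its `load_dotenv()`), just the idea, with the parsing rules simplified:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: each KEY=VALUE line becomes an environment variable.

    A simplified sketch of what python-dotenv's load_dotenv() does for you.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blank lines and comments; only parse KEY=VALUE lines
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # setdefault: don't overwrite variables already set in your shell
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, `os.getenv("OPENAI_API_KEY")` returns the value from your `.env` file, assuming that key is present there.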
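Topic 5 above covers APIs and client libraries. To make words like "endpoint" concrete, here is a hedged sketch of the HTTP request a chat client library assembles under the hood — the URL is OpenAI's documented chat completions endpoint, while the API key and model name are placeholders:

```python
import json
import urllib.request

# What a client library builds for you: an endpoint URL, an Authorization
# header carrying your API key, and a JSON body describing your prompt.
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "what is 2+2?"}],
}
request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder - use your real key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; the client
# library then parses the JSON reply back into Python objects for you.
```

The point is that there's no magic: a "client library" is a convenience wrapper around exactly this kind of request.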

guides/05_notebooks.ipynb Normal file

@@ -0,0 +1,377 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5c291475-8c7c-461c-9b12-545a887b2432",
"metadata": {},
"source": [
"# Notebooks in Cursor\n",
"\n",
"This course makes heavy use of a brilliant thing called Notebooks (also known as Jupyter Notebooks or Labs). Those from a traditional software engineering background may feel discomfort with the \"hacky\" nature of Notebooks, but I must assure you: part of working with AI is being comfortable being a Scientist. As a Scientist, there's a lot of exploration and experimentation. And Notebooks are ideal for this kind of activity.\n",
"\n",
"A notebook is a file with the extension \".ipynb\", which stands for IPython Notebook, an early name for these.\n",
"\n",
"## Briefing on Notebooks in Cursor\n",
"\n",
"First, here's a briefing on how this fits together, and how to create and run a notebook in Cursor:\n",
"\n",
"https://chatgpt.com/share/6806291a-25f0-8012-a08b-057acb5045ae\n",
"\n",
"\n",
"## A broader guide to Notebooks with examples\n",
"\n",
"The Notebook is a Data Science playground where you can easily write code and investigate the results. It's an ideal environment for: \n",
"- Research & Development\n",
"- Prototyping\n",
"- Learning (that's us!)\n",
"\n",
"The notebook consists of a series of square boxes called \"cells\". Some of them contain text, like this cell, and some of them contain code, like the cell below.\n",
"\n",
"First, you may need to click the `Select Kernel` button on the top right, and then pick `venv (Python 3.12.x)` or similar.\n",
"\n",
"Click in a cell with code and press `Shift + Return` (or `Shift + Enter`) to run the code and print the output.\n",
"\n",
"Do that now for the cell below this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33d37cd8-55c9-4e03-868c-34aa9cab2c80",
"metadata": {},
"outputs": [],
"source": [
"# Click anywhere in this cell and press Shift + Return\n",
"\n",
"2 + 2"
]
},
{
"cell_type": "markdown",
"id": "9e95df7b-55c6-4204-b8f9-cae83360fc23",
"metadata": {},
"source": [
"## Congrats!\n",
"\n",
"Now run the next cell, which sets a value, then run the cells after it to print and use the value."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "585eb9c1-85ee-4c27-8dc2-b4d8d022eda0",
"metadata": {},
"outputs": [],
"source": [
"# Set a value for a variable\n",
"\n",
"favorite_fruit = \"bananas\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07792faa-761d-46cb-b9b7-2bbf70bb1628",
"metadata": {},
"outputs": [],
"source": [
"# The result of the last statement is shown after you run it\n",
"\n",
"favorite_fruit"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a067d2b1-53d5-4aeb-8a3c-574d39ff654a",
"metadata": {},
"outputs": [],
"source": [
"# Use the variable\n",
"\n",
"print(f\"My favorite fruit is {favorite_fruit}\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4c5a4e60-b7f4-4953-9e80-6d84ba4664ad",
"metadata": {},
"outputs": [],
"source": [
"# Now change the variable\n",
"\n",
"favorite_fruit = f\"anything but {favorite_fruit}\""
]
},
{
"cell_type": "markdown",
"id": "9442d5c9-f57d-4839-b0af-dce58646c04f",
"metadata": {},
"source": [
"## Now go back and rerun the cell with the print statement, two cells back\n",
"\n",
"See how it prints something different, even though favorite_fruit was changed further down in the notebook? \n",
"\n",
"The order that code appears in the notebook doesn't matter. What matters is the order that the code is **executed**. There's a python process sitting behind this notebook in which the variables are being changed.\n",
"\n",
"This catches some people out when they first use notebooks."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e5ec81d-7c5b-4025-bd2e-468d67b581b6",
"metadata": {},
"outputs": [],
"source": [
"# Then run this cell twice, and see if you understand what's going on\n",
"\n",
"print(f\"My favorite fruit is {favorite_fruit}\")\n",
"\n",
"favorite_fruit = \"apples\""
]
},
{
"cell_type": "markdown",
"id": "a29dab2d-bab9-4a54-8504-05e62594cc6f",
"metadata": {},
"source": [
"# Explaining the 'kernel'\n",
"\n",
"Sitting behind this notebook is a Python process which executes each cell when you run it. That Python process is known as the Kernel. Each notebook has its own separate Kernel.\n",
"\n",
"You can click the \"Restart\" button in the toolbar above to restart the Kernel.\n",
"\n",
"If you then try to run the next cell, you'll get an error, because favorite_fruit is no longer defined. You'll need to run the cells from the top of the notebook again. Then the next cell should run fine."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84b1e410-5eda-4e2c-97ce-4eebcff816c5",
"metadata": {},
"outputs": [],
"source": [
"print(f\"My favorite fruit is {favorite_fruit}\")"
]
},
{
"cell_type": "markdown",
"id": "4d4188fc-d9cc-42be-8b4e-ae8630456764",
"metadata": {},
"source": [
"# Adding and removing cells\n",
"\n",
"Click in this cell, then click the \[+ Code\] button in the toolbar above to create a new cell immediately below this one. Copy and paste in the code from the prior cell, then run it! There are also icons in the top right of the selected cell, including a bin icon to delete it.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce258424-40c3-49a7-9462-e6fa25014b03",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "30e71f50-8f01-470a-9d7a-b82a6cef4236",
"metadata": {},
"source": [
"# Cell output\n",
"\n",
"When you execute a cell, the standard output and the result of the last statement are written to the area immediately under the code, known as the 'cell output'. When you save a Notebook from the file menu (or ctrl+S or command+S), the output is also saved, making it a useful record of what happened.\n",
"\n",
"You can clean this up by clicking \"Clear All Outputs\" in the toolbar. It's a good idea to clear outputs before you push code to a repo like GitHub, otherwise the files can be large and harder to read."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4d021e2-c284-411f-8ab1-030530cfbe72",
"metadata": {},
"outputs": [],
"source": [
"spams = [\"spam\"] * 1000\n",
"print(spams)\n",
"\n",
"# Might be worth clearing output after running this!"
]
},
{
"cell_type": "markdown",
"id": "eac060f2-7a71-46e7-8235-b6ad0a76f5f8",
"metadata": {},
"source": [
"# Using markdown\n",
"\n",
"So what's going on with these areas with writing in them, like this one? Well, there's actually a different kind of cell called a 'Markdown' cell for adding explanations like this. Click the [+ Markdown] button to add a new markdown cell.\n",
"\n",
"Add some comments using Markdown format, perhaps copying and pasting from here:\n",
"\n",
"```\n",
"# This is a heading\n",
"## This is a sub-head\n",
"### And a sub-sub-head\n",
"\n",
"I like Jupyter Lab because it's\n",
"- Easy\n",
"- Flexible\n",
"- Satisfying\n",
"```\n",
"\n",
"Then simply press Shift+Return in the cell to turn it into formatted text.\n",
"Click in the cell and press the Bin icon if you want to remove it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1586320-c90f-4f22-8b39-df6865484950",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "1330c83c-67ac-4ca0-ac92-a71699e0c31b",
"metadata": {},
"source": [
"# The exclamation point\n",
"\n",
"There's a super useful feature of Jupyter: you can type a command with a ! in front of it in a code cell, like:\n",
"\n",
"!ls \n",
"!pwd\n",
"\n",
"And it will run at the command line (as if in Windows PowerShell or Mac Terminal) and print the result."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82042fc5-a907-4381-a4b8-eb9386df19cd",
"metadata": {},
"outputs": [],
"source": [
"# list the current directory\n",
"\n",
"!ls"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4fc3e3da-8a55-40cc-9706-48bf12a0e20e",
"metadata": {},
"outputs": [],
"source": [
"# ping cnn.com - press the stop / interrupt button in the toolbar when you're bored\n",
"\n",
"!ping cnn.com"
]
},
{
"cell_type": "markdown",
"id": "4688baaf-a72c-41b5-90b6-474cb24790a7",
"metadata": {},
"source": [
"# Minor things we encounter on the course\n",
"\n",
"This isn't strictly a notebook feature, but it's a nice package to know about that's especially useful in notebooks.\n",
"\n",
"The package `tqdm` will print a nice progress bar if you wrap any iterable."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2646a4e5-3c23-4aee-a34d-d623815187d2",
"metadata": {},
"outputs": [],
"source": [
"# Here's some code with no progress bar\n",
"# It will take 10 seconds while you wonder what's happening...\n",
"\n",
"import time\n",
"\n",
"spams = [\"spam\"] * 1000\n",
"\n",
"for spam in spams:\n",
" time.sleep(0.01)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e96be3d-fa82-42a3-a8aa-b81dd20563a5",
"metadata": {},
"outputs": [],
"source": [
"# And now, with a nice little progress bar:\n",
"\n",
"import time\n",
"from tqdm import tqdm\n",
"\n",
"spams = [\"spam\"] * 1000\n",
"\n",
"for spam in tqdm(spams):\n",
" time.sleep(0.01)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "63c788dd-4618-4bb4-a5ce-204411a38ade",
"metadata": {},
"outputs": [],
"source": [
"# On a different topic, here's a useful way to print output in markdown\n",
"\n",
"from IPython.display import Markdown, display\n",
"\n",
"display(Markdown(\"# This is a big heading!\\n\\n- And this is a bullet-point\\n- So is this\\n- Me, too!\"))\n"
]
},
{
"cell_type": "markdown",
"id": "9d14c1fb-3321-4387-b6ca-9af27676f980",
"metadata": {},
"source": [
"# That's it! You're up to speed on Notebooks / Labs in Cursor.\n",
"\n",
"## Want to be even more advanced?\n",
"\n",
"If you want to become a pro at Jupyter Lab (the technology behind this), you can read their tutorial [here](https://jupyterlab.readthedocs.io/en/latest/). But this isn't required for our course; it's enough to keep hitting Shift + Return and enjoying the result!"
]
},
{
"cell_type": "markdown",
"id": "db6e47b6",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,97 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5c291475-8c7c-461c-9b12-545a887b2432",
"metadata": {},
"source": [
"# Foundations of Python and fixing NameErrors\n",
"\n",
"## First, for a complete beginner\n",
"\n",
"This introductory tutorial gets you started with the basics.\n",
"\n",
"https://chatgpt.com/share/68063082-c2d8-8012-8d45-fa674aa1c1ed\n"
]
},
{
"cell_type": "markdown",
"id": "542f0577-a826-4613-a5d7-4170e9666d04",
"metadata": {},
"source": [
"## Next: a comprehensive briefing on Python\n",
"\n",
"I'm going to defer to an AI friend for this, because these explanations are so well written with great examples. Copy and paste the code examples into a new cell to give them a try. Pick whichever section(s) you'd like to brush up on.\n",
"\n",
"**Python imports:** \n",
"https://chatgpt.com/share/672f9f31-8114-8012-be09-29ef0d0140fb\n",
"\n",
"**Python functions** including default arguments: \n",
"https://chatgpt.com/share/672f9f99-7060-8012-bfec-46d4cf77d672\n",
"\n",
"**Python strings**, including slicing, split/join, replace and literals: \n",
"https://chatgpt.com/share/672fb526-0aa0-8012-9e00-ad1687c04518\n",
"\n",
"**Python f-strings** including number and date formatting: \n",
"https://chatgpt.com/share/672fa125-0de0-8012-8e35-27918cbb481c\n",
"\n",
"**Python lists, dicts and sets**, including the `get()` method: \n",
"https://chatgpt.com/share/672fa225-3f04-8012-91af-f9c95287da8d\n",
"\n",
"**Python files** including modes, encoding, context managers, Path, glob.glob: \n",
"https://chatgpt.com/share/673b53b2-6d5c-8012-a344-221056c2f960\n",
"\n",
"**Python classes:** \n",
"https://chatgpt.com/share/672fa07a-1014-8012-b2ea-6dc679552715\n",
"\n",
"**Pickling Python objects and converting to JSON:** \n",
"https://chatgpt.com/share/673b553e-9d0c-8012-9919-f3bb5aa23e31"
]
},
{
"cell_type": "markdown",
"id": "f9e0f8e1-09b3-478b-ada7-c8c35003929b",
"metadata": {},
"source": [
"## With this in mind - understanding NameErrors in Python\n",
"\n",
"It's quite common to hit a NameError in Python. With foundational knowledge, you should always feel equipped to debug a NameError and get to the bottom of it.\n",
"\n",
"The most common reason if you're working in a Notebook is that you haven't executed all the cells in order, so the Kernel does not have something defined.\n",
"\n",
"If you're unsure how to fix a NameError, please see this [initial guide](https://chatgpt.com/share/67958312-ada0-8012-a1d3-62b3a5fcbbfc) and this [second guide with exercises](https://chatgpt.com/share/67a57e0b-0194-8012-bb50-8ea76c5995b8), and work through them both until you have high confidence.\n",
"\n",
"There's some repetition here, so feel free to skip it if you're already confident.\n",
"\n",
"After this, a NameError should never give you problems again!"
]
},
{
"cell_type": "markdown",
"id": "f465c23b",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
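The NameError guide above boils down to a tiny demonstration. Here's a minimal sketch — the variable name is just an example — of hitting a NameError and the usual notebook fix of defining before using:

```python
# Using a variable before any cell has defined it raises a NameError
try:
    print(favorite_fruit)
except NameError as e:
    print(f"Caught: {e}")  # prints: Caught: name 'favorite_fruit' is not defined

# The fix in a notebook: run the defining cell first, then use the variable
favorite_fruit = "bananas"
print(f"My favorite fruit is {favorite_fruit}")
```

In a notebook, the `try`/`except` isn't needed — the lesson is simply that the Kernel must have executed the defining cell before any cell that uses the name.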


@@ -0,0 +1,65 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vibe coding and debugging\n",
"\n",
"\"Vibe coding\" is the affectionate term for coding with the assistance of LLMs. It's particularly easy and wonderful to do with Cursor! But there are some good techniques to make it work well, which I cover on the course. Here are a few essential tips:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. If you're prompting an LLM like ChatGPT or Claude to write code, include the current date in your prompt, and instruct the LLM to generate code that uses the latest versions of APIs. This is particularly important with the OpenAI API! ChatGPT frequently tries to use outdated versions of its own API...\n",
"\n",
"2. Also in your prompts, ask LLMs to keep their code as short and simple as possible. LLMs seem to love to add lots of extra, unnecessary checks that clutter code and make it hard to find problems.\n",
"\n",
"3. Ask the same question to multiple LLMs, and pick the answer that is clearest and simplest.\n",
"\n",
"4. Similarly: take the answer from one LLM, and ask another LLM to verify it for correctness and whether it could be simpler or clearer.\n",
"\n",
"5. Ask an LLM to give you 3 variations for ways to solve a problem, and pick the one that is clearest."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Vibe Coding for a larger project"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try to avoid having an LLM generate 100-200 lines of code or more; it will be so hard for you to debug and figure out what's going wrong (unless you're already knowledgeable).\n",
"\n",
"Instead: start by breaking down your problem into small, independently testable steps. If you're not sure how to break down your problem - this is something you can ask the LLM to do!\n",
"\n",
"Then for each of these building blocks:\n",
"- Use the tips above to have the LLM build the code\n",
"- Also have the LLM write tests to test and verify the code\n",
"- Test it yourself and satisfy yourself that it's working correctly\n",
"\n",
"This will allow you to build a larger project with confidence.\n",
"\n",
"## The golden rule: start small, work incrementally!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
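To make the "small, independently testable steps" advice above concrete, here's a sketch of one building block plus the quick checks you'd have the LLM write alongside it. The function and its purpose are purely illustrative:

```python
# One small building block of a larger project: short, readable, easy to test
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The matching tests - satisfy yourself each block works before asking
# the LLM for the next one
assert word_count("hello world") == 2
assert word_count("") == 0
assert word_count("  spaced   out  ") == 2
```

When every block carries tests like this, a failure points straight at the block that broke, instead of somewhere in 200 lines of generated code.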

guides/08_debugging.ipynb Normal file

@@ -0,0 +1,72 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Debugging Survival Guide\n",
"\n",
"## The key is to consistently reproduce your problem in 5-10 lines of code\n",
"\n",
"Hitting an exception can sometimes feel quite frustrating, particularly if you're not sure how to approach it. But this is where the best learning happens! Getting to the bottom of hard problems is a great way to learn, and can be quite satisfying.\n",
"\n",
"But I'm here to help if you can't!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Preliminaries\n",
"\n",
"Here is a briefing on Exceptions, Stack Traces, Exception Handling and more:\n",
"\n",
"https://chatgpt.com/share/6806383c-ab38-8012-b21f-61af665bb900\n",
"\n",
"Also: see [this guide](https://chatgpt.com/share/681f691b-6644-8012-b07d-207c68f259d5) if you're unsure how to take screenshots."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Formula for debugging: recreate in 10 lines of code\n",
"\n",
"There is an art and a science to great debugging. The best way to pick it up is by practice! But here are the essentials:\n",
"\n",
"1. Recreate \n",
"Reproduce your problem, consistently\n",
"\n",
"2. Simplify \n",
"Reduce your problem down to the simplest possible way to reproduce it. I can't tell you how many people have sent me 100+ lines of code and said \"Ed, this isn't working, please fix it\". That's not how it works! The first thing I would try is to reduce this to the core problem - ideally in 10-20 lines of code or less. In one case, I showed a student how to reproduce it with 2 lines of code!\n",
"\n",
"_The thing is, it's much easier for **you** to do this than anybody else._ So while it can be tedious, this is almost always the best way to start. Reduce down to a few lines of code. And by the way, when you do that, you often figure out the problem yourself anyway!\n",
"\n",
"3. Get help \n",
"Once you've recreated the problem with just a few lines of code, if you can't see what's happening - you should get help! It's worth asking ChatGPT and friends of course; give them the short example and the full stack trace.\n",
"\n",
"And reach out to me! I'm here to make your learning experience as awesome as possible and if you are stuck, I will help you get unstuck.\n",
"\n",
"### The best way to work with me for fastest fixes...\n",
"\n",
"1. Simplify the problem as much as possible, with an easy way to reproduce\n",
"2. Include the full stack trace, and a screenshot not a photo (see [this guide](https://chatgpt.com/share/681f691b-6644-8012-b07d-207c68f259d5) if you're unsure how to take screenshots)\n",
"3. Email me at ed@edwarddonner.com. Or best of all: if you've used Google Colab before, this is a really terrific way to share an issue, because it will be identical for me to reproduce, and I can fix it and share the fix with you directly.\n",
"\n",
"I look forward to helping you!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
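The "recreate in 10 lines" formula above can look like this in practice. A sketch — the buggy function is invented for illustration — showing a minimal, consistent reproduction plus the full stack trace you'd share when asking for help:

```python
import traceback

def average(numbers):
    # Bug: dividing by len(numbers) fails on an empty list
    return sum(numbers) / len(numbers)

# Minimal reproduction of the problem in a few lines
try:
    average([])
except ZeroDivisionError:
    # This full stack trace is exactly what you include when you ask for help
    traceback.print_exc()
```

Reducing a failure down to a snippet like this is usually the moment you spot the fix yourself — and if not, it gives anyone helping you everything they need.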


@@ -0,0 +1,466 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM APIs and Ollama - beyond OpenAI\n",
"\n",
"_IMPORTANT: If you're not as familiar with APIs in general, and with Environment Variables on your PC or Mac, please review the APIs section in Guide 4 Technical Foundations before proceeding with this guide (topics 3 and 5 in Guide 4)._\n",
"\n",
"## Crucial context for using models other than OpenAI - please read this first!\n",
"\n",
"Throughout the course, we use APIs for connecting with the strongest LLMs on the planet.\n",
"\n",
"The companies behind these LLMs, such as OpenAI, Anthropic, Google and DeepSeek, have built web endpoints. You call their models by making an HTTP request to a Web Address and passing in all the information about your prompts.\n",
"\n",
"But it would be painful if we needed to build HTTP requests every time we wanted to call an API.\n",
"\n",
"To make this simple, the team at OpenAI wrote a Python utility known as a \"client library\", which wraps the HTTP call. So you write Python code and it calls the web.\n",
"\n",
"And THAT is what the library `openai` is.\n",
"\n",
"### What is the `openai` Python client library?\n",
"\n",
"It:\n",
"- Is a lightweight Python utility\n",
"- Turns your Python calls into an HTTP request\n",
"- Converts the results coming back from the HTTP call into Python objects\n",
"\n",
"### What it is NOT\n",
"\n",
"- It doesn't contain any code to actually run a Large Language Model - no GPT code! It just makes a web request\n",
"- There's no scientific computing code, and nothing particularly specialized for OpenAI\n",
"\n",
"### How to use it:\n",
"\n",
"```python\n",
"# Create an OpenAI python client for making web calls to OpenAI\n",
"openai = OpenAI()\n",
"\n",
"# Make the call\n",
"response = openai.chat.completions.create(model=\"gpt-4.1-mini\", messages=[{\"role\":\"user\", \"content\": \"what is 2+2?\"}])\n",
"\n",
"# Print the result\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"### What does this do\n",
"\n",
"When you make the python call: `openai.chat.completions.create()` \n",
"It simply makes a web request to this url: `https://api.openai.com/v1/chat/completions` \n",
"And it converts the response to python objects.\n",
"\n",
"That's it.\n",
"\n",
"Here's the API documentation if you make [direct web HTTP calls](https://platform.openai.com/docs/guides/text?api-mode=chat&lang=curl) \n",
"And here's the same API documentation if you use the [Python Client Library](https://platform.openai.com/docs/guides/text?api-mode=chat&lang=python)\n",
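"\n",
"To make this concrete, here's a sketch of the HTTP request that the client library builds for you. The key is a placeholder - a real call needs your own key - and this sketch only constructs the payload without sending it:\n",
"\n",
"```python\n",
"import json\n",
"\n",
"url = 'https://api.openai.com/v1/chat/completions'\n",
"headers = {\n",
"    'Authorization': 'Bearer sk-your-key-here',  # placeholder key\n",
"    'Content-Type': 'application/json',\n",
"}\n",
"body = {\n",
"    'model': 'gpt-4.1-mini',\n",
"    'messages': [{'role': 'user', 'content': 'what is 2+2?'}],\n",
"}\n",
"payload = json.dumps(body)  # this JSON is what travels over the wire\n",
"print(payload)\n",
"```\n",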
"\n",
"## With that context - how do I use other LLMs?\n",
"\n",
"It turns out - it's super easy!\n",
"\n",
"All the other major LLM providers offer API endpoints that are compatible with OpenAI's.\n",
"\n",
"And so OpenAI did everyone a favor: they said, hey look - you can all use our utility for converting Python to web requests. We'll allow you to change the utility from calling `https://api.openai.com/v1` to calling any web address that you specify.\n",
"\n",
"And so you can use the OpenAI utility even for calling models that are NOT OpenAI, like this:\n",
"\n",
"`not_actually_openai = OpenAI(base_url=\"https://somewhere.completely.different/\", api_key=\"another_providers_key\")`\n",
"\n",
"It's important to appreciate that this OpenAI code is just a utility for making HTTP calls to endpoints. So even though we're using code from the OpenAI team, we can use it to call models other than OpenAI.\n",
"\n",
"Here are all the OpenAI-compatible endpoints from the major providers. It even includes using Ollama, locally. Ollama provides an endpoint on your local machine, and they made it OpenAI compatible too - very convenient.\n",
"\n",
"```python\n",
"ANTHROPIC_BASE_URL = \"https://api.anthropic.com/v1/\"\n",
"DEEPSEEK_BASE_URL = \"https://api.deepseek.com/v1\"\n",
"GEMINI_BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"GROK_BASE_URL = \"https://api.x.ai/v1\"\n",
"GROQ_BASE_URL = \"https://api.groq.com/openai/v1\"\n",
"OPENROUTER_BASE_URL = \"https://openrouter.ai/api/v1\"\n",
"OLLAMA_BASE_URL = \"http://localhost:11434/v1\"\n",
"```\n",
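"\n",
"As a convenience, here's a small helper that maps a provider name to its base URL and the `.env` key name. The key names are an assumption based on the examples in this guide - adjust them if yours differ:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"PROVIDERS = {\n",
"    'deepseek': ('https://api.deepseek.com/v1', 'DEEPSEEK_API_KEY'),\n",
"    'gemini': ('https://generativelanguage.googleapis.com/v1beta/openai/', 'GOOGLE_API_KEY'),\n",
"    'grok': ('https://api.x.ai/v1', 'GROK_API_KEY'),\n",
"    'groq': ('https://api.groq.com/openai/v1', 'GROQ_API_KEY'),\n",
"    'openrouter': ('https://openrouter.ai/api/v1', 'OPENROUTER_API_KEY'),\n",
"    'ollama': ('http://localhost:11434/v1', None),  # local, no key needed\n",
"}\n",
"\n",
"def client_config(provider):\n",
"    base_url, key_env = PROVIDERS[provider]\n",
"    api_key = os.getenv(key_env) if key_env else 'ollama'\n",
"    return {'base_url': base_url, 'api_key': api_key}\n",
"\n",
"# Then: client = OpenAI(**client_config('deepseek'))\n",
"print(client_config('ollama'))\n",
"```\n",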
"\n",
"## Here are examples for Gemini, DeepSeek, Ollama and OpenRouter\n",
"\n",
"### Example 1: Using Gemini instead of OpenAI\n",
"\n",
"1. Visit Google AI Studio to set up an account: https://aistudio.google.com/ \n",
"2. Add your key as GOOGLE_API_KEY to your `.env` \n",
"3. Also add it a second time as GEMINI_API_KEY to your `.env` - this will be helpful later.\n",
"\n",
"Then:\n",
"\n",
"```python\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"load_dotenv(override=True)\n",
"\n",
"GEMINI_BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"google_api_key = os.getenv(\"GOOGLE_API_KEY\")\n",
"gemini = OpenAI(base_url=GEMINI_BASE_URL, api_key=google_api_key)\n",
"response = gemini.chat.completions.create(model=\"gemini-2.5-flash-preview-05-20\", messages=[{\"role\":\"user\", \"content\": \"what is 2+2?\"}])\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"### Example 2: Using DeepSeek API instead of OpenAI (cheap, and only $2 upfront)\n",
"\n",
"1. Visit DeepSeek API to set up an account: https://platform.deepseek.com/ \n",
"2. You will need to add an initial $2 minimum balance. \n",
"3. Add your key as DEEPSEEK_API_KEY to your `.env` \n",
"\n",
"Then:\n",
"\n",
"```python\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"load_dotenv(override=True)\n",
"\n",
"DEEPSEEK_BASE_URL = \"https://api.deepseek.com/v1\"\n",
"deepseek_api_key = os.getenv(\"DEEPSEEK_API_KEY\")\n",
"deepseek = OpenAI(base_url=DEEPSEEK_BASE_URL, api_key=deepseek_api_key)\n",
"response = deepseek.chat.completions.create(model=\"deepseek-chat\", messages=[{\"role\":\"user\", \"content\": \"what is 2+2?\"}])\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"### Example 3: Using Ollama to be free and local instead of OpenAI\n",
"\n",
"Ollama allows you to run models locally; it provides an OpenAI compatible API on your machine. \n",
"There's no API key for Ollama: everything runs on your machine, no third party has your credit card, so there's nothing to keep secret.\n",
"\n",
"1. If you're new to Ollama, install it by following the instructions here: https://ollama.com \n",
"2. Then in a Cursor Terminal, do `ollama run llama3.2` to chat with Llama 3.2 \n",
"BEWARE: do not use llama3.3 or llama4 - these are massive models not designed for home computing! They will fill up your disk. \n",
"\n",
"Then:\n",
"\n",
"```python\n",
"!ollama pull llama3.2\n",
"\n",
"from openai import OpenAI\n",
"\n",
"OLLAMA_BASE_URL = \"http://localhost:11434/v1\"\n",
"ollama = OpenAI(base_url=OLLAMA_BASE_URL, api_key=\"anything\")\n",
"response = ollama.chat.completions.create(model=\"llama3.2\", messages=[{\"role\":\"user\", \"content\": \"what is 2+2?\"}])\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"### Example 4: Using the popular service [OpenRouter](https://openrouter.ai) which has an easier billing process instead of OpenAI\n",
"\n",
"OpenRouter is very convenient: it gives you free access to many models, and easy access to paid models with only a small upfront balance.\n",
"\n",
"1. Sign up at https://openrouter.ai\n",
"2. Add the minimum upfront balance as needed\n",
"3. Add your key as OPENROUTER_API_KEY to your `.env` file\n",
"\n",
"Then:\n",
"\n",
"```python\n",
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"load_dotenv(override=True)\n",
"\n",
"OPENROUTER_BASE_URL = \"https://openrouter.ai/api/v1\"\n",
"openrouter_api_key = os.getenv(\"OPENROUTER_API_KEY\")\n",
"openrouter = OpenAI(base_url=OPENROUTER_BASE_URL, api_key=openrouter_api_key)\n",
"response = openrouter.chat.completions.create(model=\"openai/gpt-4.1-nano\", messages=[{\"role\":\"user\", \"content\": \"what is 2+2?\"}])\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"\n",
"### Using different API providers with Agent Frameworks\n",
"\n",
"The Agent Frameworks make it easy to switch between these providers. You can switch LLMs and pick different ones at any point in the course. There are more notes below on each of them. For OpenAI Agents SDK, see a section later in this notebook. For CrewAI, we cover it on the course, but it's easy: just use the full path to the model that LiteLLM expects."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Costs of APIs\n",
"\n",
"The cost of each API call is very low indeed - most calls to models we use on this course are fractions of cents.\n",
"\n",
"But it's extremely important to note:\n",
"\n",
"1. A complex Agentic project could involve many LLM calls - perhaps 20-30 - and so it can add up. It's important to set limits and monitor usage.\n",
"\n",
"2. With Agentic AI, there is a risk of Agents getting into a loop or carrying out more processing than intended. You should monitor your API usage, and never put more budget than you are comfortable with. Some APIs have an \"auto-refill\" setting that can charge automatically to your card - I strongly recommend you keep this off.\n",
"\n",
"3. You should only spend what you are comfortable with. There is a free alternative in Ollama that you can use as a replacement if you wish. DeepSeek, Gemini 2.5 Flash and gpt-4.1-nano are significantly cheaper.\n",
"\n",
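"Here's a back-of-envelope sketch of how costs add up. The prices are illustrative placeholders, not real rates - always check your provider's pricing page:\n",
"\n",
"```python\n",
"# Illustrative prices only - check the provider's pricing page for real rates\n",
"input_price_per_million = 0.40   # dollars per 1M input tokens\n",
"output_price_per_million = 1.60  # dollars per 1M output tokens\n",
"\n",
"calls = 25                   # a complex agentic run might make 20-30 calls\n",
"input_tokens_per_call = 2_000\n",
"output_tokens_per_call = 500\n",
"\n",
"cost = calls * (\n",
"    input_tokens_per_call * input_price_per_million\n",
"    + output_tokens_per_call * output_price_per_million\n",
") / 1_000_000\n",
"print(f'Estimated cost: ${cost:.2f}')  # tiny per call, but it adds up\n",
"```\n",
"\n",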
"Keep in mind that these LLM calls typically involve trillions of floating point calculations - someone has to pay the electricity bills!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Ollama: Free alternative to Paid APIs (but please see Warning about llama version)\n",
"\n",
"Ollama is a product that runs locally on your machine. It can run open-source models, and it provides an API endpoint on your computer that is compatible with OpenAI.\n",
"\n",
"First, download Ollama by visiting:\n",
"https://ollama.com\n",
"\n",
"Then from your Terminal in Cursor (View menu >> Terminal), run this command to download a model:\n",
"\n",
"```shell\n",
"ollama pull llama3.2\n",
"```\n",
"\n",
"WARNING: Be careful not to use llama3.3 or llama4 - these are much larger models that are not suitable for home computers.\n",
"\n",
"And now, any time that we have code like: \n",
"`openai = OpenAI()` \n",
"You can use this as a direct replacement: \n",
"`openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')` \n",
"And also replace model names like **gpt-4o-mini** with **llama3.2**. \n",
"\n",
"You don't need to put anything in your .env file for this; with Ollama, everything is running on your computer. You're not calling out to a third party on the cloud, nobody has your credit card details, so there's no need for a secret key! The code `api_key='ollama'` above is only required because the OpenAI client library expects an api_key to be passed in, but the value is ignored by Ollama.\n",
"\n",
"Below is a full example:\n",
"\n",
"```python\n",
"# You need to do this one time on your computer\n",
"!ollama pull llama3.2\n",
"\n",
"from openai import OpenAI\n",
"MODEL = \"llama3.2\"\n",
"openai = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n",
"\n",
"response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[{\"role\": \"user\", \"content\": \"What is 2 + 2?\"}]\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"You will need to make similar changes to use Ollama within any of the Agent Frameworks - you should be able to google for an exact example, or ask me."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OpenRouter: Convenient gateway platform for OpenAI and others\n",
"\n",
"OpenRouter is a third party service that allows you to connect to a wide range of LLMs, including OpenAI.\n",
"\n",
"It's known for having a simpler billing process that may be easier for some countries outside the US.\n",
"\n",
"First, check out their website: \n",
"https://openrouter.ai/\n",
"\n",
"Then, take a peek at their quickstart: \n",
"https://openrouter.ai/docs/quickstart\n",
"\n",
"And add your key to your .env file: \n",
"```shell\n",
"OPENROUTER_API_KEY=sk-or....\n",
"```\n",
"\n",
"And now, any time you have code like this: \n",
"```python\n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()\n",
"```\n",
"\n",
"You can replace it with code like this:\n",
"\n",
"```python\n",
"MODEL = \"openai/gpt-4o-mini\"\n",
"openrouter_api_key = os.getenv(\"OPENROUTER_API_KEY\")\n",
"openai = OpenAI(base_url=\"https://openrouter.ai/api/v1\", api_key=openrouter_api_key)\n",
"\n",
"response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[{\"role\": \"user\", \"content\": \"What is 2 + 2?\"}]\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"You will need to make similar changes to use OpenRouter within any of the Agent Frameworks - you should be able to google for an exact example, or ask me."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OpenAI Agents SDK - specific instructions\n",
"\n",
"With OpenAI Agents SDK (weeks 2 and 6), it's particularly easy to use any model provided by OpenAI themselves. Simply pass in the model name:\n",
"\n",
"`agent = Agent(name=\"Jokester\", instructions=\"You are a joke teller\", model=\"gpt-4o-mini\")`\n",
"\n",
"You can also substitute in any other provider with an OpenAI compatible API. You do it in 3 steps like this:\n",
"\n",
"```python\n",
"DEEPSEEK_BASE_URL = \"https://api.deepseek.com/v1\"\n",
"deepseek_client = AsyncOpenAI(base_url=DEEPSEEK_BASE_URL, api_key=deepseek_api_key)\n",
"deepseek_model = OpenAIChatCompletionsModel(model=\"deepseek-chat\", openai_client=deepseek_client)\n",
"```\n",
"\n",
"And then you simply provide this model when you create an Agent.\n",
"\n",
"`agent = Agent(name=\"Jokester\", instructions=\"You are a joke teller\", model=deepseek_model)`\n",
"\n",
"And you can use a similar approach for any other OpenAI compatible API, with the same 3 steps:\n",
"\n",
"```python\n",
"# extra imports\n",
"from agents import OpenAIChatCompletionsModel\n",
"from openai import AsyncOpenAI\n",
"\n",
"# Step 1: specify the base URL endpoints where the provider offers an OpenAI compatible API\n",
"GEMINI_BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
"GROK_BASE_URL = \"https://api.x.ai/v1\"\n",
"GROQ_BASE_URL = \"https://api.groq.com/openai/v1\"\n",
"OPENROUTER_BASE_URL = \"https://openrouter.ai/api/v1\"\n",
"OLLAMA_BASE_URL = \"http://localhost:11434/v1\"\n",
"\n",
"# Step 2: Create an AsyncOpenAI object for that endpoint\n",
"gemini_client = AsyncOpenAI(base_url=GEMINI_BASE_URL, api_key=google_api_key)\n",
"grok_client = AsyncOpenAI(base_url=GROK_BASE_URL, api_key=grok_api_key)\n",
"groq_client = AsyncOpenAI(base_url=GROQ_BASE_URL, api_key=groq_api_key)\n",
"openrouter_client = AsyncOpenAI(base_url=OPENROUTER_BASE_URL, api_key=openrouter_api_key)\n",
"ollama_client = AsyncOpenAI(base_url=OLLAMA_BASE_URL, api_key=\"ollama\")\n",
"\n",
"# Step 3: Create a model object to provide when creating an Agent\n",
"gemini_model = OpenAIChatCompletionsModel(model=\"gemini-2.5-flash\", openai_client=gemini_client)\n",
"grok_3_model = OpenAIChatCompletionsModel(model=\"grok-3-mini-beta\", openai_client=grok_client)\n",
"llama3_3_model = OpenAIChatCompletionsModel(model=\"llama-3.3-70b-versatile\", openai_client=groq_client)\n",
"grok_3_via_openrouter_model = OpenAIChatCompletionsModel(model=\"x-ai/grok-3-mini-beta\", openai_client=openrouter_client)\n",
"llama_3_2_local_model = OpenAIChatCompletionsModel(model=\"llama3.2\", openai_client=ollama_client)\n",
"```\n",
"\n",
"### To use Azure with OpenAI Agents SDK\n",
"\n",
"See instructions here: \n",
"https://techcommunity.microsoft.com/blog/azure-ai-services-blog/use-azure-openai-and-apim-with-the-openai-agents-sdk/4392537\n",
"\n",
"Such as this:\n",
"```python\n",
"from openai import AsyncAzureOpenAI\n",
"from agents import set_default_openai_client\n",
"from dotenv import load_dotenv\n",
"import os\n",
" \n",
"# Load environment variables\n",
"load_dotenv()\n",
" \n",
"# Create OpenAI client using Azure OpenAI\n",
"openai_client = AsyncAzureOpenAI(\n",
" api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\n",
" api_version=os.getenv(\"AZURE_OPENAI_API_VERSION\"),\n",
" azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"),\n",
" azure_deployment=os.getenv(\"AZURE_OPENAI_DEPLOYMENT\")\n",
")\n",
" \n",
"# Set the default OpenAI client for the Agents SDK\n",
"set_default_openai_client(openai_client)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CrewAI setup\n",
"\n",
"Here's Crew's docs for LLM connections with the model names to use for all models. As student Sadan S. pointed out (thank you!), it's worth knowing that for Google you need to use the environment variable `GEMINI_API_KEY` instead of `GOOGLE_API_KEY`:\n",
"\n",
"https://docs.crewai.com/concepts/llms\n",
"\n",
"And here's their tutorial with some more info:\n",
"\n",
"https://docs.crewai.com/how-to/llm-connections"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## LangGraph setup\n",
"\n",
"To use LangGraph with Ollama (and follow similar for other models): \n",
"https://python.langchain.com/docs/integrations/chat/ollama/#installation\n",
"\n",
"First add the package: \n",
"`uv add langchain-ollama`\n",
"\n",
"Then in the lab, make this replacement: \n",
"```python\n",
"from langchain_ollama import ChatOllama\n",
"# llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"llm = ChatOllama(model=\"gemma3:4b\")\n",
"```\n",
"\n",
"And obviously run `!ollama pull gemma3:4b` (or whichever model) beforehand.\n",
"\n",
"Many thanks to Miroslav P. for adding this, and to Arvin F. for the question!\n",
"\n",
"## LangGraph with other models\n",
"\n",
"Just follow the same recipe as above, but use any of the models from here: \n",
"https://python.langchain.com/docs/integrations/chat/\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AutoGen with other models\n",
"\n",
"Here's another contribution from Miroslav P. (thank you!) for using Ollama + local models with AutoGen, and Miroslav has a great example showing gemma3 performing well.\n",
"\n",
"```python\n",
"# model_client = OpenAIChatCompletionClient(model=\"gpt-4o-mini\")\n",
" \n",
"from autogen_ext.models.ollama import OllamaChatCompletionClient\n",
" \n",
"model_client = OllamaChatCompletionClient(\n",
" model=\"gemma3:4b\",\n",
" model_info={\n",
" \"vision\": True,\n",
" \"function_calling\": False,\n",
" \"json_output\": True,\n",
" \"family\": \"unknown\",\n",
" },\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Worth keeping in mind\n",
"\n",
"1. If you wish to use Ollama to run models locally, you may find that smaller models struggle with the more advanced projects. You'll need to experiment with different model sizes and capabilities, and plenty of patience may be needed to find something that works well. I expect several of our projects are too challenging for llama3.2. As an alternative, consider the free models on openrouter.ai, or the very cheap models that are almost free - like DeepSeek.\n",
"\n",
"2. Chat models often do better than Reasoning models because Reasoning models can \"over-think\" some assignments. It's important to experiment. Bigger isn't always better...\n",
"\n",
"3. It's confusing, but there are 2 different providers that sound similar! \n",
"- Grok is the LLM from Elon Musk's X\n",
"- Groq is a platform for fast inference of open source models\n",
"\n",
"A student pointed out to me that \"Groq\" came first!\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,482 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5c291475-8c7c-461c-9b12-545a887b2432",
"metadata": {},
"source": [
"# Intermediate Level Python\n",
"\n",
"## A briefing on more advanced features of Python\n",
"\n",
"This section assumes you're up to speed on the foundations - and now we cover some important features of python that we use on the course.\n",
"\n",
"1. Comprehensions \n",
"2. Generators \n",
"3. Sub-classes, Type Hints, Pydantic \n",
"4. Decorators\n",
"5. Docker (not really python, but we use it to run python code!)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5802e2f0-0ea0-4237-bbb7-f375a34260f0",
"metadata": {},
"outputs": [],
"source": [
"# First let's create some things:\n",
"\n",
"fruits = [\"Apples\", \"Bananas\", \"Pears\"]\n",
"\n",
"book1 = {\"title\": \"Great Expectations\", \"author\": \"Charles Dickens\"}\n",
"book2 = {\"title\": \"Bleak House\", \"author\": \"Charles Dickens\"}\n",
"book3 = {\"title\": \"A Book By No Author\"}\n",
"book4 = {\"title\": \"Moby Dick\", \"author\": \"Herman Melville\"}\n",
"\n",
"books = [book1, book2, book3, book4]"
]
},
{
"cell_type": "markdown",
"id": "9b941e6a-3658-4144-a8d4-72f5e72f3707",
"metadata": {},
"source": [
"# Part 1: List and dict comprehensions"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61992bb8-735d-4dad-8747-8c10b63aec82",
"metadata": {},
"outputs": [],
"source": [
"# Simple enough to start\n",
"\n",
"for fruit in fruits:\n",
" print(fruit)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c89c3842-9b74-47fa-8424-0fcb08e4177c",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a new version of fruits\n",
"\n",
"fruits_shouted = []\n",
"for fruit in fruits:\n",
" fruits_shouted.append(fruit.upper())\n",
"\n",
"fruits_shouted"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ec13b3a-9545-44f1-874a-2910a0663560",
"metadata": {},
"outputs": [],
"source": [
"# You probably already know this\n",
"# There's a nice Python construct called \"list comprehension\" that does this:\n",
"\n",
"fruits_shouted2 = [fruit.upper() for fruit in fruits]\n",
"fruits_shouted2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ecc08c3c-181d-4b64-a3e1-b0ccffc6c0cd",
"metadata": {},
"outputs": [],
"source": [
"# But you may not know that you can do this to create dictionaries, too:\n",
"\n",
"fruit_mapping = {fruit: fruit.upper() for fruit in fruits}\n",
"fruit_mapping"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "500c2406-00d2-4793-b57b-f49b612760c8",
"metadata": {},
"outputs": [],
"source": [
"# you can also use the if statement to filter the results\n",
"\n",
"fruits_with_longer_names_shouted = [fruit.upper() for fruit in fruits if len(fruit)>5]\n",
"fruits_with_longer_names_shouted"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38c11c34-d71e-45ba-945b-a3d37dc29793",
"metadata": {},
"outputs": [],
"source": [
"fruit_mapping_unless_starts_with_a = {fruit: fruit.upper() for fruit in fruits if not fruit.startswith('A')}\n",
"fruit_mapping_unless_starts_with_a"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c97d8e8-31de-4afa-973e-28d8e5cab749",
"metadata": {},
"outputs": [],
"source": [
"# Another comprehension\n",
"\n",
"[book['title'] for book in books]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50be0edc-a4cd-493f-a680-06080bb497b4",
"metadata": {},
"outputs": [],
"source": [
"# This code will fail with an error because one of our books doesn't have an author\n",
"\n",
"[book['author'] for book in books]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "53794083-cc09-4edb-b448-2ffb7e8495c2",
"metadata": {},
"outputs": [],
"source": [
"# But this will work, because get() returns None\n",
"\n",
"[book.get('author') for book in books]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8e4b859-24f8-4016-8d74-c2cef226d049",
"metadata": {},
"outputs": [],
"source": [
"# And this variation will filter out the None\n",
"\n",
"[book.get('author') for book in books if book.get('author')]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c44bb999-52b4-4dee-810b-8a400db8f25f",
"metadata": {},
"outputs": [],
"source": [
"# And this version will convert it into a set, removing duplicates\n",
"\n",
"set([book.get('author') for book in books if book.get('author')])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80a65156-6192-4bb4-b4e6-df3fdc933891",
"metadata": {},
"outputs": [],
"source": [
"# And finally, this version is even nicer\n",
"# curly braces creates a set, so this is a set comprehension\n",
"\n",
"{book.get('author') for book in books if book.get('author')}"
]
},
{
"cell_type": "markdown",
"id": "c100e5db-5438-4715-921c-3f7152f83f4a",
"metadata": {},
"source": [
"# Part 2: Generators\n",
"\n",
"We use Generators in the course because AI models can stream back results.\n",
"\n",
"If you've not used Generators before, please start with this excellent intro from ChatGPT:\n",
"\n",
"https://chatgpt.com/share/672faa6e-7dd0-8012-aae5-44fc0d0ec218\n",
"\n",
"Try pasting some of its examples into a cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1efc26fa-9144-4352-9a17-dfec1d246aad",
"metadata": {},
"outputs": [],
"source": [
"# First define a generator; it looks like a function, but it has yield instead of return\n",
"\n",
"import time\n",
"\n",
"def come_up_with_fruit_names():\n",
" for fruit in fruits:\n",
" time.sleep(1) # thinking of a fruit\n",
" yield fruit"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eac338bb-285c-45c8-8a3e-dbfc41409ca3",
"metadata": {},
"outputs": [],
"source": [
"# Then use it\n",
"\n",
"for fruit in come_up_with_fruit_names():\n",
" print(fruit)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f6880578-a3de-4502-952a-4572b95eb9ff",
"metadata": {},
"outputs": [],
"source": [
"# Here's another one\n",
"\n",
"def authors_generator():\n",
" for book in books:\n",
" if book.get(\"author\"):\n",
" yield book.get(\"author\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e316f02-f87f-441d-a01f-024ade949607",
"metadata": {},
"outputs": [],
"source": [
"# Use it\n",
"\n",
"for author in authors_generator():\n",
" print(author)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7535c9d0-410e-4e56-a86c-ae6c0e16053f",
"metadata": {},
"outputs": [],
"source": [
"# Here's the same thing written with list comprehension\n",
"\n",
"def authors_generator():\n",
" for author in [book.get(\"author\") for book in books if book.get(\"author\")]:\n",
" yield author"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dad34494-0f6c-4edb-b03f-b8d49ee186f2",
"metadata": {},
"outputs": [],
"source": [
"# Use it\n",
"\n",
"for author in authors_generator():\n",
" print(author)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abeb7e61-d8aa-4af0-b05a-ae17323e678c",
"metadata": {},
"outputs": [],
"source": [
"# Here's a nice shortcut\n",
"# You can use \"yield from\" to yield each item of an iterable\n",
"\n",
"def authors_generator():\n",
" yield from [book.get(\"author\") for book in books if book.get(\"author\")]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05b0cb43-aa83-4762-a797-d3beb0f22c44",
"metadata": {},
"outputs": [],
"source": [
"# Use it\n",
"\n",
"for author in authors_generator():\n",
" print(author)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fdfea58e-d809-4dd4-b7b0-c26427f8be55",
"metadata": {},
"outputs": [],
"source": [
"# And finally - we can replace the list comprehension with a set comprehension\n",
"\n",
"def unique_authors_generator():\n",
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e821d08-97be-4db9-9a5b-ce5dced3eff8",
"metadata": {},
"outputs": [],
"source": [
"# Use it\n",
"\n",
"for author in unique_authors_generator():\n",
" print(author)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905ba603-15d8-4d01-9a79-60ec293d7ca1",
"metadata": {},
"outputs": [],
"source": [
"# And for some fun - press the stop button in the toolbar when bored!\n",
"# It's like we've made our own Large Language Model... although not particularly large...\n",
"# See if you understand why it prints a letter at a time, instead of a word at a time. If you're unsure, try removing the keyword \"from\" everywhere in the code.\n",
"\n",
"import random\n",
"import time\n",
"\n",
"pronouns = [\"I\", \"You\", \"We\", \"They\"]\n",
"verbs = [\"eat\", \"detest\", \"bathe in\", \"deny the existence of\", \"resent\", \"pontificate about\", \"juggle\", \"impersonate\", \"worship\", \"misplace\", \"conspire with\", \"philosophize about\", \"tap dance on\", \"dramatically renounce\", \"secretly collect\"]\n",
"adjectives = [\"turquoise\", \"smelly\", \"arrogant\", \"festering\", \"pleasing\", \"whimsical\", \"disheveled\", \"pretentious\", \"wobbly\", \"melodramatic\", \"pompous\", \"fluorescent\", \"bewildered\", \"suspicious\", \"overripe\"]\n",
"nouns = [\"turnips\", \"rodents\", \"eels\", \"walruses\", \"kumquats\", \"monocles\", \"spreadsheets\", \"bagpipes\", \"wombats\", \"accordions\", \"mustaches\", \"calculators\", \"jellyfish\", \"thermostats\"]\n",
"\n",
"def infinite_random_sentences():\n",
" while True:\n",
" yield from random.choice(pronouns)\n",
" yield \" \"\n",
" yield from random.choice(verbs)\n",
" yield \" \"\n",
" yield from random.choice(adjectives)\n",
" yield \" \"\n",
" yield from random.choice(nouns)\n",
" yield \". \"\n",
"\n",
"for letter in infinite_random_sentences():\n",
" print(letter, end=\"\", flush=True)\n",
" time.sleep(0.02)"
]
},
{
"cell_type": "markdown",
"id": "04832ea2-2447-4473-a449-104f80e24d85",
"metadata": {},
"source": [
"# Exercise\n",
"\n",
"Write some python classes for the books example.\n",
"\n",
"Write a Book class with a title and author. Include a method has_author()\n",
"\n",
"Write a BookShelf class with a list of books. Include a generator method unique_authors()"
]
},
{
"cell_type": "markdown",
"id": "35760406-fe6c-41f9-b0c0-3e8cf73aafd0",
"metadata": {},
"source": [
"# Part 3: Sub-classes, Type Hints, Pydantic\n",
"\n",
"Here are some intermediate level details of Classes from our AI friend, including use of type hints, inheritance and class methods. This includes a Book example.\n",
"\n",
"https://chatgpt.com/share/67348aca-65fc-8012-a4a9-fd1b8f04ba59\n",
"\n",
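"As a quick taste, here's a minimal sketch of type hints and inheritance using the Book example (the class and method names are just for illustration):\n",
"\n",
"```python\n",
"from typing import Optional\n",
"\n",
"class Book:\n",
"    def __init__(self, title: str, author: Optional[str] = None):\n",
"        self.title = title\n",
"        self.author = author\n",
"\n",
"    def describe(self) -> str:\n",
"        return self.title + ' by ' + (self.author or 'Unknown')\n",
"\n",
"class AudioBook(Book):  # a subclass inherits title, author and describe()\n",
"    def __init__(self, title: str, narrator: str, author: Optional[str] = None):\n",
"        super().__init__(title, author)\n",
"        self.narrator = narrator\n",
"\n",
"audiobook = AudioBook('Bleak House', narrator='A. Reader', author='Charles Dickens')\n",
"print(audiobook.describe())\n",
"```\n",
"\n",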
"And here is a comprehensive tutorial on Pydantic classes covering everything you need to know about Pydantic.\n",
"\n",
"https://chatgpt.com/share/68064537-6cfc-8012-93e1-f7dd0932f321"
]
},
{
"cell_type": "markdown",
"id": "6bbc9c63",
"metadata": {},
"source": [
"## Part 4: Decorators\n",
"\n",
"Here is a briefing, with an example from OpenAI Agents SDK:\n",
"\n",
"https://chatgpt.com/share/6806474d-3880-8012-b2a2-87b3ee4489da"
]
},
{
"cell_type": "markdown",
"id": "0beef7e9",
"metadata": {},
"source": [
"## Part 5: Docker\n",
"\n",
"Here is a convenient tutorial to introduce Docker.\n",
"\n",
"In the last section, this also covers an answer to a question in Week 6 - what does it mean to run an MCP server in Docker? But you can ignore this question if you're not on week 6 yet.\n",
"\n",
"https://chatgpt.com/share/6814bc1d-2f3c-8012-9b18-dddc82ea421b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73e215b2",
"metadata": {},
"outputs": [],
"source": [
"# You need to install docker to run this example\n",
"# This will download the Docker image for python 3.12, create a container,\n",
"# Run some python code and print the result\n",
"\n",
"!docker run --rm python:3.12 python -c \"print(2 + 2)\""
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,156 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5c291475-8c7c-461c-9b12-545a887b2432",
"metadata": {},
"source": [
"# Async Python\n",
"\n",
"## A briefing on asynchronous python coding, essential in Agent engineering"
]
},
{
"cell_type": "markdown",
"id": "538fa044",
"metadata": {},
"source": [
"Here is a masterful tutorial by you-know-who with exercises and comparisons.\n",
"\n",
"https://chatgpt.com/share/680648b1-b0a0-8012-8449-4f90b540886c\n",
"\n",
"This includes how to run async code from a python module.\n",
"\n",
"### And now some examples:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "09f5662a",
"metadata": {},
"outputs": [],
"source": [
"# Let's define an async function\n",
"\n",
"import asyncio\n",
"\n",
"async def do_some_work():\n",
" print(\"Starting work\")\n",
" await asyncio.sleep(1)\n",
" print(\"Work complete\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07ab3abf",
"metadata": {},
"outputs": [],
"source": [
"# What will this do?\n",
"\n",
"do_some_work()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6d681b6d",
"metadata": {},
"outputs": [],
"source": [
"# OK let's try that again!\n",
"\n",
"await do_some_work()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea867090",
"metadata": {},
"outputs": [],
"source": [
"# What's wrong with this?\n",
"\n",
"async def do_a_lot_of_work():\n",
" do_some_work()\n",
" do_some_work()\n",
" do_some_work()\n",
"\n",
"await do_a_lot_of_work()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e9c75c3f",
"metadata": {},
"outputs": [],
"source": [
"# Interesting warning! Let's fix it\n",
"\n",
"async def do_a_lot_of_work():\n",
" await do_some_work()\n",
" await do_some_work()\n",
" await do_some_work()\n",
"\n",
"await do_a_lot_of_work()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "720cf3f5",
"metadata": {},
"outputs": [],
"source": [
"# And now let's do it in parallel\n",
"# It's important to recognize that this is not \"multi-threading\" in the way that you may be used to\n",
"# The asyncio library is running on a single thread, but it's using a loop to switch between tasks while one is waiting\n",
"\n",
"async def do_a_lot_of_work_in_parallel():\n",
" await asyncio.gather(do_some_work(), do_some_work(), do_some_work())\n",
"\n",
"await do_a_lot_of_work_in_parallel()"
]
},
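{
"cell_type": "markdown",
"id": "9d4e21aa",
"metadata": {},
"source": [
"To see the concurrency, you can time it: three one-second sleeps under `gather` take about one second in total, not three. This sketch uses `asyncio.run` so it works as a standalone script - inside a notebook cell, where an event loop is already running, use `await main()` instead:\n",
"\n",
"```python\n",
"import asyncio\n",
"import time\n",
"\n",
"async def work():\n",
"    await asyncio.sleep(1)\n",
"\n",
"async def main():\n",
"    start = time.perf_counter()\n",
"    await asyncio.gather(work(), work(), work())\n",
"    return time.perf_counter() - start\n",
"\n",
"elapsed = asyncio.run(main())\n",
"print(f'Took {elapsed:.2f} seconds')  # roughly 1 second, not 3\n",
"```"
]
},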
{
"cell_type": "markdown",
"id": "230f85de",
"metadata": {},
"source": [
"### Finally - try writing a python module that calls do_a_lot_of_work_in_parallel\n",
"\n",
"See the link at the top; you'll need something like this in your module:\n",
"\n",
"```python\n",
"if __name__ == \"__main__\":\n",
" asyncio.run(do_a_lot_of_work_in_parallel())\n",
"```"
]
}
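,
{
"cell_type": "markdown",
"id": "module-sketch",
"metadata": {},
"source": [
"For reference, here's a minimal sketch of the whole module (the filename `parallel_work.py` is just a suggestion), assembled from the functions in this notebook:\n",
"\n",
"```python\n",
"import asyncio\n",
"\n",
"async def do_some_work():\n",
"    print(\"Starting work\")\n",
"    await asyncio.sleep(1)\n",
"    print(\"Work complete\")\n",
"\n",
"async def do_a_lot_of_work_in_parallel():\n",
"    # gather schedules all three coroutines on the same event loop\n",
"    await asyncio.gather(do_some_work(), do_some_work(), do_some_work())\n",
"\n",
"if __name__ == \"__main__\":\n",
"    # a module must start its own event loop - notebooks already have one running\n",
"    asyncio.run(do_a_lot_of_work_in_parallel())\n",
"```"
]
}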
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,97 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to start working on your idea\n",
"\n",
"So you've had an idea for something you'd like to build.\n",
"\n",
"CONGRATULATIONS! That's the most important part already done. The idea is everything!\n",
"\n",
"But now you're unsure where to begin. I get a lot of questions about this.\n",
"\n",
"I want to give you three essential pieces of advice.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advice 1: Be a Scientist\n",
"\n",
"Being an AI Engineer means wearing two hats: Software Engineer and Data Scientist.\n",
"\n",
"At the start of your project, it's crucial to take off your Software Engineering hat and firmly put on your Data Science hat.\n",
"\n",
"A lot of people from a software engineering background (myself included!) struggle with this. It's common to fall back into the comfort zone, asking questions about architecture, design, data pipelines, scalability and deployment.\n",
"\n",
"These are all great questions, but they are not the critical ones at this early stage. They are not the hard questions. They won't make or break your idea.\n",
"\n",
"For the critical questions, you need to be a Scientist.\n",
"\n",
"Ask yourself: \n",
"1. How will you evaluate the success of your model? What will you measure? \n",
"2. What data do you have, and what data do you need? \n",
"3. How can you build small prototypes to test different techniques and understand model performance?\n",
"\n",
"Address the science questions first - what you can achieve with LLMs and how.\n",
"\n",
"And I recommend starting in a Notebook before you work in Python modules, to force yourself to operate with this scientific mindset.\n",
"\n",
"This is even more crucial with Agent projects. It can be tempting to draw up a big architecture diagram of how your agents will collaborate. But that is thinking like a Software Engineer. I recommend approaching it differently: experiment with different approaches, investigate what works well, test hypotheses and iterate."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advice 2: Embrace R&D\n",
"\n",
"OK so this is pretty similar to Advice 1, but worth repeating!\n",
"\n",
"I often get asked questions like: \"Ed, I want to build the following solution. Should I use model A, B, or C? Should I use Agents, RAG or fine-tuning?\"\n",
"\n",
"My answer is almost always: \n",
"- You should do all of the above! The key is to experiment: (a) come up with evaluation criteria, (b) develop a curated dataset, (c) test different ideas and see how they perform\n",
"- I often have an instinct for which will work best, but my instinct is often wrong! Don't trust me: try it yourself \n",
"- There's simply no substitute for experimentation."
]
},
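{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make that concrete, an experiment harness can start out absurdly small. Everything below is hypothetical - the dataset, the 'approach', and the metric are placeholders for your own:\n",
"\n",
"```python\n",
"# (a) an evaluation criterion: exact-match accuracy\n",
"# (b) a tiny curated dataset of (input, expected) pairs\n",
"# (c) test different ideas and compare the scores\n",
"\n",
"dataset = [(\"What is 2+2?\", \"4\"), (\"Capital of France?\", \"Paris\")]\n",
"\n",
"def approach_a(question):\n",
"    # stand-in for e.g. a single LLM call with prompt variant A\n",
"    return \"4\" if \"2+2\" in question else \"Paris\"\n",
"\n",
"def evaluate(approach, dataset):\n",
"    correct = sum(approach(q) == expected for q, expected in dataset)\n",
"    return correct / len(dataset)\n",
"\n",
"print(f\"Approach A: {evaluate(approach_a, dataset):.0%}\")\n",
"```\n",
"\n",
"Swap in real approaches (different models, RAG, fine-tuning) and let the numbers, not instinct, pick the winner."
]
},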
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advice 3: Dream big but start SMALL!\n",
"\n",
"I also often have students send me 200-300 lines of code saying, \"Ed, this isn't performing, please fix it\". 😂\n",
"\n",
"As I say in the debugging guide - that's not how it works!\n",
"\n",
"When starting projects, it's crucial to start small and simple. Perfect every prompt; work in detail on every step. Satisfy yourself that each LLM call is performing as you wish, and iterate on the inputs until responses are consistent and reliable.\n",
"\n",
"If you always work incrementally and with small, testable building blocks, you should have complete clarity on what's going on.\n",
"\n",
"And of course, I love helping people with projects and it's great to help fix problems. But if you approach your project incrementally, you should always have clarity on exactly what isn't performing - and then it's super efficient for me to help you."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# In summary\n",
"\n",
"You probably noticed the common theme running through these three pieces of advice. It can feel jarring for people from a non-Data Science background to approach problems in such an ad-hoc way; it feels \"hacky\" and unsatisfying. But in my opinion, this is the single most important skill to acquire to be a successful AI engineer: being comfortable with uncertainty, enjoying the experiments, and embracing being a Scientist."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

55
guides/13_frontend.ipynb Normal file

@@ -0,0 +1,55 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "695d0c99",
"metadata": {},
"source": [
"# Your frontend crash course\n",
"\n",
"Here is a seven-part series developed by you-know-who under my direction. It's very nicely done! You should take this in any direction you like.\n",
"\n",
"ChatGPT has written these in note form for someone who wants to experiment; you should ask it to rewrite in a more verbose way if you'd prefer deeper explanations.\n",
"\n",
"1. Refresher on modern HTML, CSS and JavaScript\n",
"\n",
"https://chatgpt.com/share/68c345ee-ae34-8012-9a05-0ce43a816f9c\n",
"\n",
"2. Modern JavaScript\n",
"\n",
"https://chatgpt.com/share/68c3495f-0c94-8012-b048-db1670aefa47\n",
"\n",
"3. From JavaScript to TypeScript\n",
"\n",
"https://chatgpt.com/share/68c34974-b768-8012-9926-5a9ed67d59c3\n",
"\n",
"4. React (TypeScript) basics\n",
"\n",
"https://chatgpt.com/share/68c3498d-e340-8012-b3a3-37a28802db6a\n",
"\n",
"5. React - state, props, hooks\n",
"\n",
"https://chatgpt.com/share/68c349c7-e75c-8012-b1e5-b4f51bb0e27c\n",
"\n",
"6. NextJS introduction\n",
"\n",
"https://chatgpt.com/share/68c349e7-e740-8012-8f0c-ba40ad7e148d\n",
"\n",
"7. NextJS rendering models and Tailwind\n",
"\n",
"https://chatgpt.com/share/68c34a13-3120-8012-9a67-f53d7a0cfc99"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,33 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "303a5c4f",
"metadata": {},
"source": [
"# Briefings on Docker and Terraform\n",
"\n",
"Here's a mutual friend giving an excellent self-study tutorial on Docker essentials:\n",
"\n",
"https://chatgpt.com/share/68c34f21-07bc-8012-b411-35cc97aff6ab\n",
"\n",
"And here is an equally excellent overview of Terraform to get you building infrastructure with code like a pro. No more AWS console!\n",
"\n",
"https://chatgpt.com/share/68c3513a-c194-8012-85c3-c0ea9143af5a"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}