Clean up notebook to reduce file sizes

This commit is contained in:
lisekarimi
2025-06-07 05:08:01 +02:00
parent 6548e346bc
commit c1cf5675b1
2 changed files with 52 additions and 8581 deletions


@@ -24,7 +24,8 @@
"\n",
"---\n",
"📢 Find more LLM notebooks on my [GitHub repository](https://github.com/lisekarimi/lexo)\n"
]
],
"outputs": []
},
{
"cell_type": "markdown",
@@ -42,7 +43,8 @@
"- 🧑‍💻 Local vs Remote code execution:\n",
" - Local code runs on your machine\n",
"   - Remote code runs in the cloud (via Modal)"
]
],
"outputs": []
},
{
"cell_type": "markdown",
@@ -53,7 +55,8 @@
"- Before starting, install Modal in your environment (Run this once): `uv pip install modal`\n",
"- Create an account at modal.com (they give you $5 free to start).\n",
"- Then authenticate your environment: `modal setup`"
]
],
"outputs": []
},
{
"cell_type": "code",
@@ -101,7 +104,8 @@
"- `modal.App`: Defines and registers the Modal app\n",
"- `.remote()`: Sends request to Modal API to execute the code remotely\n",
"- `modal deploy -m`: Deploys app permanently like a real cloud service"
]
],
"outputs": []
},
{
"cell_type": "markdown",
@@ -109,7 +113,8 @@
"metadata": {},
"source": [
"## ⚙️ Minimal Example"
]
],
"outputs": []
},
{
"cell_type": "code",
@@ -148,7 +153,8 @@
"What we have here is an **ephemeral app**: the container shuts down after finishing.\n",
"\n",
"For our project, we need a persistently running app that behaves like a production API. To achieve that, we should use `modal deploy -m`, making the app suitable for serving AI services reliably."
]
],
"outputs": []
},
{
"cell_type": "markdown",
@@ -156,7 +162,8 @@
"metadata": {},
"source": [
"## 📦 Persistent Deployment with `modal deploy`"
]
],
"outputs": []
},
{
"attachments": {
@@ -180,7 +187,8 @@
"This builds and deploys the app (`example-hello-world`), registers `f()`, and makes it callable via `.remote()` anytime — even outside the notebook.\n",
"\n",
"![image.png](attachment:b84a3557-9805-462f-a1d5-008b3aa4f4f5.png)"
]
],
"outputs": []
},
{
"cell_type": "code",
@@ -239,7 +247,8 @@
"- Prepare the app to handle `.remote()` calls when they come in\n",
"\n",
"![image.png](attachment:b950fed1-8806-424c-830a-d8b99927801e.png)"
]
],
"outputs": []
},
{
"attachments": {
@@ -279,7 +288,8 @@
"To save credits, you can set `min_containers=0` and `scaledown_window=300` — this way, no container stays warm by default, and a new one will spin up only when `.remote()` is called (i.e., on cold start).\n",
"\n",
"![image.png](attachment:4a22e438-6b25-4c69-9439-99d146ffd188.png)\n"
]
],
"outputs": []
},
{
"cell_type": "code",
@@ -303,7 +313,8 @@
"let's go one step further — wrap this logic inside a local Python class.\n",
"\n",
"In the next step, we'll build a local Agent that cleanly interacts with our deployed `Modal app`, using the same `Modal API` under the hood."
]
],
"outputs": []
},
{
"cell_type": "markdown",
@@ -324,7 +335,8 @@
"2. `.price.remote(...)` → runs the remote method on Modal \n",
"\n",
"This keeps our code clean and modular."
]
],
"outputs": []
},
{
"cell_type": "code",
@@ -347,7 +359,8 @@
"Now that we've seen how Modal agents work — connecting to remote services and running `.remote()` — we'll use the same pattern for the rest of our models.\n",
"\n",
"✅ For each model — **XGBoost**, **GPT-4o RAG**, and the **Ensemble** — we'll build a dedicated Agent. "
]
],
"outputs": []
}
],
"metadata": {
@@ -371,4 +384,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
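
The notebook's "local Agent wrapping a deployed Modal app" pattern can be sketched roughly as below. This is a minimal local sketch, not the notebook's actual code: `RemotePricer`, `SpecialistAgent`, and the `99.99` return value are hypothetical stand-ins, and the stub replaces the real `modal.Cls.from_name(...)` / `.price.remote(...)` calls so the wrapper can run without a Modal account.

```python
# Hedged sketch of the local Agent pattern described in the notebook.
# RemotePricer is a hypothetical stand-in for a deployed Modal class;
# a real agent would obtain it via modal.Cls.from_name(...) and call
# .price.remote(...) instead.

class RemotePricer:
    """Stand-in for a deployed Modal class (hypothetical)."""

    def price(self, description: str) -> float:
        # A real implementation would execute remotely on Modal.
        return 99.99


class SpecialistAgent:
    """Local agent that hides the remote-call details from callers."""

    def __init__(self):
        # In the real notebook this would look something like:
        #   pricer_cls = modal.Cls.from_name("pricer-service", "Pricer")
        #   self.pricer = pricer_cls()
        self.pricer = RemotePricer()

    def price(self, description: str) -> float:
        # Delegates to the (stubbed) remote method.
        return self.pricer.price(description)


agent = SpecialistAgent()
print(agent.price("vintage camera"))
```

The point of the wrapper is that callers only see `agent.price(...)`; whether the work happens locally or via `.remote()` on Modal is an implementation detail of the agent.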