{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8fa27a71",
   "metadata": {},
   "source": [
    "# Week 7 Exercise: Fine-Tuning the Gemma 3 Open-Source LLM for Price Prediction\n",
    "\n",
    "## Project Overview\n",
    "Fine-tuning an open-source large language model to predict product prices from their textual descriptions.\n",
    "\n",
    "## Base Model\n",
    "unsloth/gemma-3-4b-it\n",
    "\n",
    "## Dataset\n",
    "ed-donner/pricer-data\n",
    "\n",
    "## Fine-Tuning Details\n",
    "* Fine-tuning approach: LoRA (Low-Rank Adaptation)\n",
    "* Target modules: all attention and MLP layers\n",
    "* Framework: Unsloth + TRL (Transformer Reinforcement Learning)\n",
    "* Chat template formatting for a consistent input structure\n",
    "* 4-bit quantization for memory-efficient training\n",
    "\n",
    "See the setup sketch in the cells below.\n",
    "\n",
    "Link to the Colab notebook: https://colab.research.google.com/drive/1sCQQ_OoR2kd1ASivfbUQx-lIE-wHIT6s?usp=sharing\n"
   ]
  },
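  {
   "cell_type": "markdown",
   "id": "3b9c2d41",
   "metadata": {},
   "source": [
    "## Setup Sketch\n",
    "\n",
    "The cell below is a minimal sketch of the pipeline described above: loading unsloth/gemma-3-4b-it in 4-bit, attaching LoRA adapters to the attention and MLP projections, formatting ed-donner/pricer-data with the chat template, and running supervised fine-tuning with TRL's SFTTrainer. It is not the exact Colab code: the hyperparameters and dataset column names are assumptions, exact argument names vary across TRL versions, and the real values live in the linked notebook.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d7e1f02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the workflow above, assuming Unsloth and TRL are installed.\n",
    "# Hyperparameters and dataset column names are illustrative assumptions.\n",
    "from unsloth import FastLanguageModel\n",
    "from datasets import load_dataset\n",
    "from trl import SFTConfig, SFTTrainer\n",
    "\n",
    "# Load the base model with 4-bit quantization for memory-efficient training.\n",
    "model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name=\"unsloth/gemma-3-4b-it\",\n",
    "    max_seq_length=2048,  # assumed; size to fit the product descriptions\n",
    "    load_in_4bit=True,\n",
    ")\n",
    "\n",
    "# Attach LoRA adapters to all attention and MLP projection layers.\n",
    "model = FastLanguageModel.get_peft_model(\n",
    "    model,\n",
    "    r=16,           # assumed LoRA rank\n",
    "    lora_alpha=16,  # assumed scaling factor\n",
    "    lora_dropout=0.0,\n",
    "    target_modules=[\n",
    "        \"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",  # attention\n",
    "        \"gate_proj\", \"up_proj\", \"down_proj\",     # MLP\n",
    "    ],\n",
    ")\n",
    "\n",
    "# Format each example with the tokenizer's chat template so every\n",
    "# prompt shares a consistent input structure.\n",
    "dataset = load_dataset(\"ed-donner/pricer-data\", split=\"train\")\n",
    "\n",
    "def to_chat_text(example):\n",
    "    # \"text\" and \"price\" are assumed column names; adapt to the schema.\n",
    "    messages = [\n",
    "        {\"role\": \"user\", \"content\": example[\"text\"]},\n",
    "        {\"role\": \"assistant\", \"content\": f\"Price is ${example['price']:.2f}\"},\n",
    "    ]\n",
    "    return {\"text\": tokenizer.apply_chat_template(messages, tokenize=False)}\n",
    "\n",
    "dataset = dataset.map(to_chat_text)\n",
    "\n",
    "# Supervised fine-tuning with TRL.\n",
    "trainer = SFTTrainer(\n",
    "    model=model,\n",
    "    tokenizer=tokenizer,\n",
    "    train_dataset=dataset,\n",
    "    args=SFTConfig(\n",
    "        dataset_text_field=\"text\",\n",
    "        per_device_train_batch_size=2,  # assumed\n",
    "        gradient_accumulation_steps=4,  # assumed\n",
    "        max_steps=60,                   # assumed; train longer in practice\n",
    "        learning_rate=2e-4,\n",
    "        output_dir=\"outputs\",\n",
    "    ),\n",
    ")\n",
    "trainer.train()\n"
   ]
  }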
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}